Tencent Cloud

TDSQL Boundless

Release Notes

V21.2.x

Last updated: 2026-03-26 15:13:52

V21.2.4

Version Release Notes

Operations

Added dumping of coroutine stack information when memory monitoring is triggered
To assist in diagnosing issues caused by excessive memory usage or suspected deadlocks, this update added a feature that synchronously dumps the stacks of all goroutines and prints heap memory (heap) profile information when the Agent's memory monitoring threshold is reached. This helps developers determine whether memory leaks or heartbeat pileups are caused by coroutine blocking or deadlocks, improving troubleshooting efficiency.
Enhanced SST File Monitoring Capability
Added a method to query the number of small SST files, supporting display of the number of small SST files and the total number of SST files in a specified tier. The threshold for identifying small SST files follows RocksDB's compaction_merge_small_file_trigger_ratio configuration, making it easier for Ops personnel to monitor and optimize storage layer performance.
Optimized Slow Log Storage Path
Migrated the default slow log path from the data disk to the log disk. This resolves the issue where slow log accumulation during large-scale data imports (such as Bulk Load) occupies data disk space, leading to inaccurate disk usage assessment by MC and seemingly uneven table distribution.
Enhanced Parallel Query Error Information
Enhanced the error information when SQL parallel execution fails by adding necessary error node details. This resolves the issue in multi-node instances where parallel query failures only display "read data from remote node error" without identifying the specific problematic node, thereby improving troubleshooting efficiency.
Enhanced DDL Monitoring Metrics
Added two monitoring metrics, Ddl_count (number of executed DDLs) and Ddl_failed_count (number of failed DDLs), enabling DDL failure rate alarm calculation to facilitate timely detection and handling of DDL execution exceptions.
Optimized Archive Task Cleanup Policy
Optimized the MC archive task cleanup policy by configuring retention days and retention entries as two independent thresholds. This addresses the issue of high latency in Range requests caused by excessive archive tasks, thereby improving MC operational efficiency.
Dynamic Loading of Parameter Templates
The frontend parameter template dynamically reads the actual kernel's parameter default values and value ranges. This resolves the issue where the default values and ranges displayed on the frontend do not match those in the actual kernel, preventing users from mistakenly believing that parameters have been modified and thereby enhancing user experience.
transaction_isolation Parameter Exposure
Added the transaction_isolation field to the TDStore parameter configuration database parameters, enabling users to configure transaction isolation levels via the console to meet customer needs for visualized parameter management.
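The console is the primary surface for this setting; as an illustration, the isolation level can also be inspected and adjusted through standard MySQL-compatible syntax (a sketch, assuming default MySQL compatibility):

```sql
-- Inspect the isolation level currently in effect.
SELECT @@transaction_isolation;

-- Adjust it for the current session (MySQL-compatible syntax).
SET SESSION transaction_isolation = 'READ-COMMITTED';
```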

Database management

Optimized mc-ctl command-line tool to enhance usability.
This update introduced multiple optimizations to the mc-ctl command-line tool: 1) Added the cluster gv delete-single command to delete a single global variable with more intuitive syntax; 2) Added the cluster gv-old get and cluster gv-old delete-single commands to support querying and deleting operations for global variables in the old format (JSON); 3) Added the show node_role command to facilitate users in viewing the enumerated values of node roles.
Replica Type Resizing Capability
Supports reconfiguring existing instances in the production environment from all fully-functional replicas to a hybrid deployment mode combining fully-functional replicas and log replicas. This meets replica configuration requirements across different business scenarios and enhances resource utilization.

Scalability and Performance

Optimized the DDL execution path for large partitioned tables to reduce cross-node RPCs
When DDL statements (such as CREATE/DROP TABLE) or DROP DATABASE operations are executed on large partitioned tables with a massive number of partitions, each partition requires interaction with the Data Dictionary (DD). If the execution node and the Leader of the system replica group (sys rg) are not co-located, this generates substantial cross-node RPC calls. This optimization modifies the routing logic for such DDL statements, prioritizing their execution on the node where the sys rg Leader resides. This significantly reduces network overhead and enhances DDL execution efficiency.
Batch DELETE/UPDATE Performance Optimization
By default, the tdsql_stmt_optim_batch_delete and tdsql_stmt_optim_batch_update parameters are enabled, activating optimization capabilities for batch DELETE and UPDATE statements to enhance execution efficiency of large-scale data modification operations.
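Assuming the two parameters are exposed as MySQL-style system variables, the new defaults can be confirmed with:

```sql
-- Both switches should now report ON by default.
SHOW VARIABLES LIKE 'tdsql_stmt_optim_batch_delete';
SHOW VARIABLES LIKE 'tdsql_stmt_optim_batch_update';
```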
Partitioned Table Metadata Memory Optimization
Optimized the memory overhead of the Table Cache in partitioned table scenarios. By sharing relevant structures between partitions and delaying memory allocation, this significantly reduces the memory footprint in large-scale partitioned table environments.
Splitting Scenario Slow Query Optimization
Optimized the handling logic when RG splitting occurs during transaction execution. Upon receiving EC_TDS_TRANS_PREPARE_NEED_MORE_PART from a participant, the coordinator immediately sends Prepare requests to already Ready participants to refresh the participant list, thus avoiding slow queries caused by waiting for the next retry cycle (default 1 second).
Leader Switchover Scenario Slow Query Optimization
Optimized the speed at which SQLEngine detects the new Leader during RG leader switchover scenarios. Fixed the issue where the Lease::CheckLease function returned inaccurate error codes when the RG state was unwritable. It now ensures the return of EC_TDS_TRANS_REP_GROUP_NOT_LEADER and includes the recommended_retry_connection, enabling SQLEngine to promptly identify the new Leader and avoid routing delays exceeding 2 seconds.
Merge Scenario Slow Query Optimization
Optimized deadlock issues in concurrent scenarios involving TPCC and RG merging. When HandleBlockParticipantPrepareForMergeTrans fails to obtain the participant lock, it proactively aborts lock waiting to avoid slow queries caused by lock wait timeouts (default 50 seconds).
Proxy Executor Memory Optimization
Optimized the memory release mechanism for Proxy Executor. When OnConnClose is triggered, a background coroutine is initiated to immediately perform the release operation, instead of relying on periodic cleanup by background threads. This prevents OOM risks caused by delayed memory release in scenarios with short-lived connections and high concurrency.
Single-row INSERT Batch Optimization
Optimized performance for single-row data insertion scenarios. When data and multiple indexes reside on the same node, Batch Insert is used to combine data and index writes into a single RPC call, reducing network overhead.
Compaction Compression Algorithm Optimization
Optimized the compression algorithm selection policy for SST files output by Compaction. It now considers not only the compression algorithm settings of the target layer but also the compression algorithms of input SST files, ensuring the output compression level is not lower than the input. This resolves the space inflation issue caused when ZSTD-compressed SSTs are ingested into L0/L1 layers during RG migration and subsequently degenerate to no compression during Compaction.
L0 Layer Write Control Optimization
Added Soft/Hard Limit parameters for L0 layer data volume to control write throttling and blocking behavior. Resolved the issue of mis-throttling of writes caused by overestimation of estimated_pending_compaction_bytes, while controlling the additional disk space required for Compaction from L0 to L1 layer through L0 layer size limitation, thereby enhancing the stability of write performance.
Adaptive Parameter Tuning for Large-Memory Instances
Dynamically sets the tdstore_data_db_table_cache_numshardbits and tdstore_block_cache_num_shard_bits parameters based on instance memory size: the shard bits are set to 6 when memory is below 64 GB, 7 for 64-128 GB, 8 for 128-256 GB, 9 for 256-512 GB, and 10 above 512 GB, optimizing cache performance for large-memory instances.
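Assuming the two parameters are visible as system variables, the derived values can be checked after a restart; for example, an instance with 200 GB of memory falls in the 128-256 GB band:

```sql
-- Expected to report 8 on a 128-256 GB instance (2^8 = 256 cache shards).
SHOW VARIABLES LIKE 'tdstore_block_cache_num_shard_bits';
SHOW VARIABLES LIKE 'tdstore_data_db_table_cache_numshardbits';
```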
INSERT ON DUPLICATE KEY UPDATE Performance Optimization
Optimized the write performance of the extended syntax INSERT INTO ... AS new ON DUPLICATE KEY UPDATE. Resolved the significant performance gap compared to REPLACE INTO and added support for Batch optimization to improve bulk write efficiency.
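The optimized statement shape is the MySQL 8.0 row-alias form; a minimal sketch (table and column names are illustrative):

```sql
-- Bulk upsert using the row alias `new` instead of the deprecated VALUES().
INSERT INTO orders (id, qty)
VALUES (1, 10), (2, 20) AS new
ON DUPLICATE KEY UPDATE qty = new.qty;
```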
Force BKA Optimization Switch
Added the force_batched_key_access option to optimizer_switch. By default, this enables forced Batched Key Access (BKA) join optimization to enhance the performance of multi-table JOIN queries. During testing, focus on multi-table joins, combinations of different JOIN types, adjustments to join_buffer_size, related optimizer switch combinations, and Parallel Hints.
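A per-session sketch of toggling the new switch (the option name comes from this note; treat the exact syntax as an assumption):

```sql
-- Turn forced BKA off for one session if a workload regresses.
SET SESSION optimizer_switch = 'force_batched_key_access=off';
-- BKA effectiveness is sensitive to the join buffer; tune alongside it.
SET SESSION join_buffer_size = 4 * 1024 * 1024;
```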
Automatic Proxy Forwarding for I_S System Views
Information_Schema system view queries are automatically forwarded via Proxy to System RG nodes for execution. Since I_S views primarily involve multi-table NLJ queries, executing them on the System RG node is more efficient, reducing cross-node RPC calls.
Enhanced Handling Efficiency of Write Fence during Region Split/Merge Processes
Optimized the handling of Write Fence during Region split and merge operations in scenarios with a large number of Regions or frequent Write Fence operations. By refining internal processing logic, this enhancement reduces latency for related operations, improves the processing efficiency of dynamic data shard adjustments (splitting/merging), and thus strengthens cluster resilience and responsiveness to data growth and workload fluctuations.
Disabled the optimizer option skip_scan by default to avoid potential negative optimization
Based on production environment practices and POC test feedback, the optimizer option skip_scan may cause performance degradation (negative optimization) in certain TDSQL Boundless query scenarios, so this optimizer switch is now disabled by default.
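Workloads that did benefit from skip scan can opt back in per session using the standard optimizer_switch syntax:

```sql
-- Re-enable skip scan for this session only.
SET SESSION optimizer_switch = 'skip_scan=on';
```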

Stability

Upgrade Compatibility Enhancement
Fixed an issue where Sys_var_flagset-type parameters (such as parallel_query_switch, optimizer_switch) caused node startup failures due to incompatibility before and after version upgrades. Added fault-tolerant handling in the configuration file reading path consistent with the SET statement path, ensuring a smooth upgrade process.
Optimization of Critical RPC Throttling for MC
MC's throttling mechanism does not restrict critical RPCs such as heartbeats from SQLEngine and TDStore by default, ensuring that NodeHeartbeat and EngineHeartbeat remain unaffected by throttling in high-load scenarios, thereby enhancing system stability.
MC Modular Operation Capability
MC supports running only specified working modules in extreme failure scenarios. Modules or coroutines can be prohibited from or allowed to start via configuration files, and designated coroutines can be paused and resumed at runtime, ensuring that critical features such as the Timestamp Service can still be recovered independently during failures.
Enhanced MC Cluster Management Capability
Optimized the History Job retrieval logic during instance cloning to prevent MC OOM caused by loading excessive Jobs from ETCD at once. Adopted a batched loading policy, pre-filtered unnecessary Jobs, and nullified certain fields to reduce memory consumption, enhancing stability in large-scale job scenarios.
Optimized RG Restart Retry Policy
Optimized the restart logic for Error RGs by changing fixed retry attempts to exponential backoff retries, persistently retrying until successful startup. This addresses replica startup failures caused by metadata write failures exceeding retry limits in disk-full scenarios, enhancing the system's self-healing capability.

Syntax and Features

Optimized Parallel Query Permission Inheritance
Worker threads executing parallel queries inherit the SKIP_DD_ACCESS_CHECK permission, ensuring proper access to the data dictionary during parallel execution and preventing execution failures caused by permission checks.
Non-blocking and Preemptive DDL
Implemented the Nonblock DDL and Preemptive DDL features. Nonblock DDL allows new transactions to access target tables even when MDL-X locks cannot be acquired; Preemptive DDL proactively terminates long-running transactions holding S locks. Controlled via Session variables, this enhances the flexibility and success rate of DDL execution.
Enhanced Lock View Readability
Enhanced the readability of the TDStore data_locks system view by adding database names and table names to the displayed information.
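With schema and table names now present, lock output can be filtered directly; a sketch assuming the view follows the MySQL performance_schema column naming (OBJECT_SCHEMA/OBJECT_NAME):

```sql
-- Show locks held on one table only (names are illustrative).
SELECT * FROM performance_schema.data_locks
WHERE OBJECT_SCHEMA = 'mydb' AND OBJECT_NAME = 'orders';
```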
Online Copy Table Parallel Verification
When verifying row count consistency between the old and new tables, Online Copy Table now performs the SELECT COUNT operation under Parallel Query (PQ) logic, improving verification efficiency for large-table DDL operations.

Data Migration

TiDB to TDSQL Boundless Migration Tool
Added a tool that implements one-click migration from TiDB to TDSQL Boundless based on BR backup files. Supports converting TiDB BR backup files to SQL/CSV format and importing them into TDSQL Boundless clusters via Bulk Load. Verified for TiDB versions including v6.1.7, v6.5.12, v7.1.5, v7.5.3, v8.1.1, and v8.5.0.

Restoring via Backups

Enhanced COS Access Stability for Bare-Metal Cluster Backup and Recovery
Enhanced the backup and recovery (TDBR) feature for bare-metal (enhanced local disk) clusters by adding the capability to access COS object storage through the Polaris (North Star) service discovery component. This solution replaces the original internal domain access method and intelligently load balances COS service requests, preventing pressure from concentrating on a small number of COS nodes during traffic spikes. Thus, it significantly enhances network stability and success rates for large-scale backup and recovery operations in IDC environments.

Fixed issues

Fixed an issue where, in partitioned tables using SUBPARTITION BY KEY, the query optimizer incorrectly pushed down the entire GROUP BY operation to the storage layer when the columns in the GROUP BY clause did not fully match the index grouping columns. This could result in incorrect grouped aggregation results.
Fixed an intermittent query jitter issue in the row-based storage engine caused by default parameter configurations of the Jemalloc memory allocator (such as background_thread not enabled). In specific memory allocation scenarios, internal calls to the madvise system call by Jemalloc could take over 1 second, causing severe delays in simple operations like point queries and point updates.
Fixed an issue where, when full-table statistics for partitioned tables were persisted (Persist), the system might aggregate statistics for only a subset of partitions.
Fixed a race condition in TDStore under specific concurrency scenarios (such as during rollback processes after failed Split operations), caused by unsynchronized access to the internal linked list in the pessimistic lock manager (TDRequestPessimisticTransLockManager). This issue could trigger null pointer dereference, ultimately leading to node crashes (Core).
Fixed an issue where, in scenarios with a large number of indexes, the frequently accessed hot tables' range statistics (range stats) were frequently evicted and reloaded due to cache capacity limits, causing performance degradation. This fix introduces a feature to specify a list of hot tables via a parameter. When the FIFO eviction policy is used, the range statistics for indexes of these tables will be "pinned" (pin) to avoid removal by the regular eviction mechanism.
Fixed an issue where the original capacity balancing trigger thresholds (relaxed-usage-diff-ratio and strict-usage-diff-ratio) were set too low. On nodes with smaller disk capacities, this could trigger unnecessary balancing schedules, disrupting the default data distribution of Hash/Key partitioned tables and subsequently affecting query performance.
Fixed an issue where a redundant get_record operation existed in the inner table data read path when partitioned tables performed Batched Key Access (BKA) Join. This redundant operation caused unnecessary performance overhead.
Fixed two issues where calling the ScatterPartition interface for partition scattering did not work as expected: 1) The scattering logic was based on Primary Replica Groups (RGs), but in version 21.2.3, Log-Only nodes (which do not host Primary RGs) were not handled correctly, resulting in incomplete scattering; 2) The constraint requiring partition indexes to be placed with their corresponding data partitions had implementation flaws, causing the scattering results to fail to meet the expected distribution.
Fixed an issue where single-table aggregate queries selected incorrect indexes. When a query already uses a covering index, the optimizer no longer incorrectly switches to an ordering index, preventing query slowdowns caused by secondary index bookkeeping access.
Fixed an issue where, when an equality condition falls within a single range and data skew exists, linear estimation could underestimate row counts.
Fixed an issue where, during range row count estimation, different indexes sharing the same prefix exhibited significant deviations in their row count estimates.
Fixed an issue where retry jobs appeared as redundant entries in the HistoryJobList during snapshot recovery. Since the original job corresponding to the retry job was already in the history list, retry jobs should be filtered out to prevent redundancy and avoid failures in subsequent TDBR operations.
Fixed an issue with the status determination of Bulk Load split tasks. When multiple tables in a Replica Group (RG) are undergoing Bulk Load imports, the Master Controller (MC) no longer prematurely releases scheduling restrictions for that RG upon completion of a single table. Instead, it waits until all tables finish importing before lifting the scheduling ban.
Fixed an issue where the client did not return error information when the BatchPut RPC failed due to lock timeout.
Fixed an issue with excessive migration scheduling by the MC. After a Destroy Replica task is completed, the MC must wait until the node receives a new heartbeat before considering it as a source (Src) node for migration. This prevents duplicate migration tasks due to heartbeat delays.
Fixed an issue where Merge Empty RG operations did not verify replica role consistency. Added a replica role check before empty Replica Groups are merged to ensure Voter/Learner configurations match between both RGs before the merge is performed.
Fixed an issue in pthread mode where futex timed wait failed to properly advance the elapsed waiting time after spurious wakeups.
Fixed an issue where RG cache on nodes was not cleared after the Passive Abort Stage is completed during full-machine migration. After migration rollback enters the Passive Abort Stage, RG records in node caches must be synchronously cleared.
Fixed an issue where nodes hung during startup when an upgrade is performed directly from version 19.1.x to 21.2.3 due to missing sys tables. Adjusted forward-compatible version checks to follow the new sys table upgrade procedure starting from version 19.2.x.
Fixed an issue where row-based storage nodes encountered Core dumps during the GenericCleanupData process. This occurred when PhysicallyDeleteRange executed successfully but data within the Range was not emptied.
Fixed an issue where inconsistent criteria for Region splitting and merging caused frequent scheduling. When a Region has a small size but contains a large number of Keys (such as 11MB with 7 million Keys), the criteria for splitting and merging are standardized to prevent repeated scheduling triggers.
Fixed an issue where TDRlogBackuper hung during file descriptor closure, causing it not to exit for a prolonged period. Adopted the standard closing procedure of performing shutdown before close to prevent hangs caused by residual data in buffers.
Fixed an issue where the Fix Offline function persistently reported errors in single-replica instances. Before the Offline status is processed, the Leader Commit ID is now verified to prevent skipping processing logic directly in single-replica scenarios.
Fixed an issue where the No Range Lock feature was incompatible with Batch Update scenarios. Added secondary index change verification and fault-tolerant handling for back-to-table record fetching in the Batch Update optimization path to ensure data consistency during update operations.
Compute nodes now aggregate routing RPCs, reducing the number of routing RPCs and alleviating MC CPU pressure in high-concurrency scenarios.
Fixed an issue where UPDATE statements could violate uniqueness constraints on unique indexes containing VARCHAR fields. Corrected the scan range calculation logic for uniqueness verification to prevent reduced verification scope caused by string length changes.
Fixed an issue where LogService hung during constraint relationship creation replay. Resolved replay blocking caused by the inability to locate the data object route created by the next pending replay task.
Fixed an issue where accessing GetRegion during MultiScan re-aggregation failure caused null pointer crashes. Added null pointer checks in the GetRegion function to return nullptr instead of accessing a null pointer when multi_range is empty.
Fixed an issue with out-of-order log replay in TDStore's Replay Barrier mechanism. Resolved a problem where during concurrent replay of multiple Barrier Logs, a subsequent Barrier Log might prematurely release the state lock after the completion of a prior one, allowing other logs to execute concurrently with Barrier Logs.
Fixed an issue where TDStore Client calls during MC leaderless state caused Client tool Core Dumps. Resolved crashes caused by array out-of-bounds during MC leader switch.
Fixed an issue where Migrate tasks reported errors during snapshot recovery. Replaced the direct use of the Leader from RG Meta with traversing replicas to locate the Leader, ensuring Migrate tasks correctly obtain the source node.
Fixed an issue where a large number of remaining deleted__async-cleanup directories in the raft_data directory caused frequent Core dumps when the Agent executed du -cshm.
Fixed an issue where a stuck asynchronous Clean Job caused CheckConflictWithCleanupDataTask to persistently detect Range Overlap. When the asynchronous cleanup task of a Destroy Job remains incomplete for an extended period, it will no longer block subsequent migration tasks.
Fixed the issue of uneven distribution of LogReceiver across multiple CDC nodes. MC added a goroutine that periodically calls the CDC balancing API, and simultaneously modified LogService parameters to restrict the memory usage of the MySQL Client on CDC nodes, avoiding OOM caused by a single CDC node carrying too many LogReceivers.
Fixed an issue where Offline Leader reported errors during snapshot recovery replay of single-replica migration jobs. In single-replica migration scenarios, snapshot recovery does not replay Transfer Leader jobs. When a Migrate Offline Leader is detected, a leader switch must be initiated first.
Fixed an issue where Item_int_with_ref fails to implement Parallel Safe, causing crashes during parallel execution of certain SQL statements.
Fixed an issue where parallel query execution of the CONVERT(SUBSTR()) function returned inconsistent results under the UTF8 character set. Ensures that parallel execution returns the same string content as serial execution.
Fixed the issue of inaccurate Proxy error messages to improve troubleshooting efficiency.
Fixed an issue where the PutTimestampToEtcdLoop goroutine exited abnormally in timestamp sinking scenarios. Changed break to continue when the goroutine fails to obtain timestamps, ensuring continuous operation.
Fixed the issue where the mc_rg_state_leader_transfer_high_frequency_gauge metric could not be reset correctly. The reset logic was originally implemented in AddLeaderHistoryLocked and would only reset after the next leader transfer occurred. This has been fixed to allow proper resetting.
After a Replica Group (RG) is deleted, its corresponding raft_data directory is no longer renamed and retained but is deleted directly.
Fixed an issue where Proxy always selected the first partition's RG Leader as the forwarding target in partitioned table scenarios. After performing partition pruning based on SQL-specified partitions or WHERE conditions, it now correctly selects the RG Leader node of the corresponding partition for forwarding, fully leveraging the Local Scan optimization.
Fixed the issue where RG_JOB_TYPE_DELETE_REGION_IN_RG tasks could not be scheduled for execution when the disk approached full capacity. Allows writes to bypass the cached write state for Local Meta CF, ensuring space-releasing operations such as Drop Table can execute normally as soon as possible.
Fixed the issue where Drop Table operations get stuck after the instance becomes read-only. Resolved the verification failure caused by Write Fence version mismatch (SQLEngine sent version 1, while TDStore recorded version 2).
Fixed the issue where TDBR backups persistently reported errors due to the CDC node Agent not starting. The TDStore backup logic now adapts to CDC nodes, Log-only nodes, and Columnar nodes, ensuring the Agent status on these nodes does not affect the backup process.
Fixed the issue where dbms_admin.show_variables displayed empty Candidate Values when querying character set-related variables, and also resolved the problem where large numerical values were displayed in scientific notation.
Fixed the issue of occasional data dictionary initialization failure during instance creation. Resolved the problem where node initialization failed due to Permission Denied errors when TDStore User RG saved Meta.
Fixed two issues with the retry mechanism during full backup recovery (TDBR) when network congestion caused frequent SST file download timeouts (such as context deadline exceeded) from COS: 1) The number of retries was fixed at 10 times and not configurable; 2) The retry interval used a fixed random value between 20-30 seconds, which proved ineffective under persistent network congestion. This could cause download tasks to fail after reaching the retry limit, subsequently causing the entire backup recovery process to stall.
Fixed an issue where, in replica groups (RGs) with a large number of Regions, the MC-internal goroutine responsible for merging primary keys held read locks for extended periods while traversing Region metadata. This caused read-write lock contention with the periodic Leader-checking goroutine (checkLeader), leading to deadlocks. After a deadlock occurred, heartbeat packet processing was blocked and packets accumulated, ultimately triggering MC out-of-memory (OOM) errors.
Fixed an issue in MC's scheduling logic when multiple partitioned tables with implicit affinity are created. When the second table is created, the logic skipped checking prefer leader and data protection (DP) rules due to detected affinity, while the target replica group (RG) might have been in the process of merging and thus unavailable for creation. This caused subsequent partitioned tables' Leaders to concentrate on the same node without constraint, failing to achieve the intended scatter distribution.

Data dictionary change

Change Type | Data Dictionary | Description
Modification | | Added descriptions for OBJECT NAME and OBJECT SCHEMA.

V21.2.3

Version Release Notes

Database management

Increased the default value of the read-only parameter max_digest_length to accommodate longer SQL statement fingerprints
Increased the default value of the SQL statement summary (Digest) length parameter max_digest_length from 1024 to 10240 to enable Outline to support longer SQL statements.
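The new default can be verified after upgrade (the parameter is read-only, so it cannot be changed at runtime):

```sql
-- Expected value after this release: 10240.
SHOW VARIABLES LIKE 'max_digest_length';
```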
Increased the default value of the parameter range_stat_maximum_scanned_partitions to improve range row count estimation for large partitioned tables
Increasing this value enables scanning more partitions during range row count estimation to obtain more accurate statistics, striking a balance between estimation precision and optimizer estimation time.
Refined SST Boundary Alignment Policy
Enhanced the SST boundary alignment policy by adding support for setting a minimum split file size to mitigate the issue of excessive small SST files when the boundary alignment policy is enabled. The default value is 4MB, meaning that only SST files larger than 4MB output by Compaction will be split.
Optimize KEY Partitioning Default Algorithm Configuration
Adjusted the default distribution algorithm for KEY partitioned tables to improve data distribution uniformity and query efficiency. Note: for KEY partitioning, algorithm=1 originally denoted the KEY_51 algorithm but now denotes MURMURHASH; the original default was algorithm=2 (KEY_55), while the current default algorithm=1 uses MURMURHASH.
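Standard MySQL DDL allows pinning the algorithm explicitly, which can be useful when reasoning about the new distribution; a sketch (table definition is illustrative):

```sql
-- ALGORITHM=1 now selects MURMURHASH in TDSQL Boundless.
CREATE TABLE t_orders (
  id BIGINT NOT NULL PRIMARY KEY,
  created_at DATETIME
)
PARTITION BY KEY ALGORITHM = 1 (id)
PARTITIONS 16;
```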
Optimized Migration Scheduling Policy
Improved the migration scheduling policy by optimizing node selection logic based on bvar metrics to prevent performance issues caused by I/O resource contention.

Scalability and Performance

Fixed the sorting logic for the partition ID list ordered by record count in partitioned table processing
Fixed an issue where, in partitioned tables, m_part_ids_sorted_by_num_of_records was not sorted in descending order by the row counts of the subpartitions.
Optimized RangeSearch implementation to reduce memory copying and redundant computations, thereby lowering overhead
During the Range Search process, the number of rows within the Range is now calculated directly, avoiding the performance overhead of generating large intermediate results.
Optimized RPC Call Timeout Retry Mechanism
Added an RPC maximum threshold variable to enforce a limit on RPC retries, preventing excessive bthread resources from being occupied by ineffective retries for extended periods when target services are unavailable. This enhances overall resource utilization efficiency and system stability under high-load or partial-failure scenarios.
Optimized rec_per_key statistics calculation performance for point lookup scenarios in partitioned tables
Optimized the overhead of calculating the average number of records per key (rec_per_key) in queries such as point lookups for tables with a large number of partitions (such as over 1,000 partitions).
Optimized the execution efficiency of DDL operations for large partitioned tables
The system has optimized the interaction pattern with the Data Dictionary (DD) when executing DDL operations on partitioned tables with a large number of partitions. By routing DDL operations to the node where the system resource group (sys rg) Leader resides, this approach eliminates numerous cross-node RPC calls, significantly reducing latency during schema modifications for large partitioned tables.
Optimized the NDV estimation algorithm
Adopted the Duj1 algorithm to optimize the estimation accuracy of overall NDV (number of distinct values), reducing statistical errors.
Optimized Hotspot Scheduling Policy
Optimized the hotspot scheduling policy, enabling the splitting of multiple tables for concurrent scheduling to enhance system load balancing capability.
Optimized Compaction Thread Count Configuration
Optimized the Compaction thread count configuration when local disks are used by adjusting the correction factor from 2x to 1.5x, balancing CPU usage and I/O performance.
Optimized SQL Forwarding Mechanism
Optimized the SQL forwarding mechanism by addressing issues such as error logs, status settings, and timeout control, enhancing forwarding performance and stability.
Optimized BulkLoad Parameter Update Mechanism
Optimized the mechanism for batch updating BulkLoad options, ensuring that parameter settings are correctly restored after data import completion and effectively reducing pending compaction bytes.
Enhanced System Table Index Statistics Management Capabilities
Supported reloading system table index statistics into memory to prevent frequent triggering of recalculations, thereby enhancing execution plan stability.
Optimized Routing Acquisition Scope for BatchPut Aggregation Operations
Optimized the routing aggregation logic in batch write (BatchPut) operations. This resolves the issue of excessively large routing acquisition scopes when operations involve a large number of tindex IDs (for example, partitioned tables across multiple partitions or long-delayed execution of ADD INDEX), avoiding unnecessary routing information retrieval and enhancing batch data write efficiency.
Optimized Backup and Recovery Performance
Optimized the slow recovery issue in the Weimob offline MySQL scenario, enhancing data recovery efficiency.
Reduced setup_read_decoders Function Call Frequency
Optimized the execution logic of correlated subqueries, reducing the proportion of overhead incurred by the setup_read_decoders function during query execution.
Optimized RPC Call Performance for UPDATE Statements
Optimized the execution flow of UPDATE statements in the SQLEngine module by batching index conflict checks and storage-layer write operations originally performed per row. This significantly reduces high-frequency, small-volume RPC network calls. The optimization improves execution efficiency for UPDATE statements involving multi-row updates. Disabled by default in production; enable as needed via the tdsql_stmt_optim_batch_update switch.
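As a minimal sketch of how the switch might be used (the orders table is illustrative, and session-level scope for the variable is an assumption):

```sql
-- Enable batched UPDATE optimization via the switch named in this release note.
-- Session-level SET scope is an assumption; the orders table is illustrative.
SET tdsql_stmt_optim_batch_update = ON;

-- A multi-row UPDATE whose per-row index conflict checks and storage-layer
-- writes can now be batched into fewer RPC calls.
UPDATE orders SET status = 'shipped' WHERE order_date < '2024-01-01';
```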
Optimized Log Receiver COS Link Performance
Optimized the COS link performance of Log Receiver by reducing redundant downloads of Raft Log files, thereby decreasing IO resource consumption and improving log retrieval efficiency.
Optimized SST File Size Parameter Configuration
Adjusted the default value of the target_file_size_multiplier_additional parameter from 1:1:1:2:2:4:8 to 1:1:1:2:2:2:2. When user_cf_target_file_size_base = 32M, under the new default value, the upper size limits for SST files from L1 to L6 are: 32M, 32M, 64M, 128M, 256M, 512M, making the SST sizes in lower layers more reasonable.
Optimized Parallel Query Executor Startup Performance
Improved the execution efficiency of StartExecutor under network pressure, reducing latency fluctuations and enhancing query stability.
Disabled LIMIT Parallel Execution in Specific Scenarios
For simple LIMIT clauses without ORDER BY / GROUP BY, executing in parallel incurs additional overhead. Parallel execution is disabled by default. If parallel processing is required, add the Hint /*+ parallel(n) */ to the SQL statement.
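For illustration (the table t1 and the parallel degree 4 are hypothetical), the hint from this note can be attached directly to the statement:

```sql
-- Runs serially by default now: a bare LIMIT without ORDER BY / GROUP BY.
SELECT * FROM t1 LIMIT 100;

-- Force parallel execution with 4 workers using the documented hint.
SELECT /*+ parallel(4) */ * FROM t1 LIMIT 100;
```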
Optimized Block Statistics Cache Eviction Mechanism
Refactored the cache eviction policy to resolve the memory footprint issue of invalid statistics under FIFO and LRU modes, enhancing memory utilization efficiency.
Extended GetSmallRange function to support remote invocation
Enhanced the GetSmallRange function to support execution on remote nodes, laying the foundation for further optimization of parallel queries.
Optimized Early Release Mechanism for Row-Level Locks
Improved the row-level lock management logic to align with MySQL behavior. Under specific switch-enabled scenarios, it now releases row-level locks that do not meet filter conditions in advance, enhancing system concurrency processing capabilities.
Unified Exchange to Non-Blocking Mode to Improve Query Performance
Resolved data skew among Workers in parallel queries, avoiding blocking waits and improving overall execution efficiency.
Optimized Thomas Write Parallel Backfill Policy
Adopted Small Range instead of Region for task partitioning, enhancing the concurrency efficiency and load balancing of data backfill.
Optimized Statistics Sampling Performance
Optimized the FindSampleKey function to reduce CPU overhead.
Improved Range Estimation Algorithm
Optimized the handling logic for Gap Ranges in statistics to prevent severe underestimation of row counts caused by skipping Gap Ranges.
Optimized GetRegionsByKeyRange query performance
Optimized GetRegionsByKeyRange query performance for production instances with large numbers of Regions, addressing CPU saturation issues and enhancing system stability.
Enhanced Concurrent Processing Capability for BulkLoad and RG Jobs
Optimized concurrency control between BulkLoad imports and RG Jobs by restricting their concurrency at the table-level granularity. RG Jobs will be prohibited for the RG to which the BulkLoad-imported table belongs, while other RGs not involved in BulkLoad imports are allowed to initiate RG Jobs.
Optimized SHOW INDEX command execution performance
Optimized the execution efficiency of the SHOW INDEX command in cross-AZ scenarios to improve query response speed.
Optimized Concurrency Control for BulkLoad and DDL Operations
Refined the mutex granularity between BulkLoad imports and DDL operations to the table level, enhancing system concurrency: DDL operations are prohibited on tables undergoing BulkLoad imports, while tables not involved in BulkLoad imports may initiate DDL operations.
Optimized the read efficiency of partition policy metadata in DDL operations
Optimized the read efficiency of partition policy metadata in DDL operations by reducing metadata access frequency through methods such as cache optimization, thereby enhancing DDL execution performance.
Optimized Log Replicas CLB Policy
Optimized the load balancing policy for log replicas by using write speed, estimated log size, and number of replicas as load metrics to improve system resource utilization efficiency.
Optimized DELETE Operation Performance
Improved the execution efficiency of DELETE operations by enhancing the performance of batch deletion through the BatchDelete RPC. This feature is disabled by default in production environments, controlled by the tdsql_stmt_optim_batch_delete parameter.
Optimized Unlock Performance for Large Transaction Commits
Optimized the unlock mechanism in the Commit process for large data volume scenarios, enhancing overall system performance through asynchronous processing.
Optimized Concurrency Control for DDL Recovery Threads
Improved the concurrency management mechanism for DDL Recovery background threads to prevent resource overconsumption during massive DDL operation failures, thus enhancing overall system stability.
Optimized Concurrent Processing Capability for BulkLoad and Regular Transactions
Improved the concurrency control logic for BulkLoad transactions and regular transactions on different tables, allowing BulkLoad transactions to be executed concurrently with regular transactions on distinct tables, thereby enhancing system resource utilization efficiency.
Optimized On-Demand Transmission Handling for Large VARCHAR Fields in Exchange.
Optimized the transmission of large VARCHAR fields in Exchange by sending data only on demand.
Enhanced Capabilities for MC HTTP V2 API
Supported specifying the maximum request and response size for HTTP V2 API to enhance interface stability.

Syntax and Features

Support enabling or disabling Range Cache at the statement level through Hint syntax.
Added SQL Hint, allowing users to control the use of Range Cache for individual query statements. Using /*+no_range_cache*/ forcibly disables Range Cache for the current query; using /*+use_range_cache*/ forcibly enables it.
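A brief sketch of the two hints in use (the table and predicate are illustrative):

```sql
-- Forcibly disable Range Cache for this query only.
SELECT /*+no_range_cache*/ COUNT(*) FROM t1 WHERE k BETWEEN 10 AND 20;

-- Forcibly enable Range Cache for this query only.
SELECT /*+use_range_cache*/ COUNT(*) FROM t1 WHERE k BETWEEN 10 AND 20;
```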
Enhanced Flashback Query Feature
Provided an API to obtain transaction timestamps, supporting data queries using specific timestamps in Flashback Query.
Optimized Error Messages for Concurrent Creation of Tables with the Same Name
Improved the error message handling for concurrent creation of tables with the same name, providing clearer conflict notifications and unifying the table existence verification logic for both DDL and DML operations.
Enhanced Display of Execution Plan Pushdown Information
Optimized the EXPLAIN output by adding explicit pushdown execution indicators, enhancing the readability of SQL execution plans and debugging efficiency.
Added Backup Feature for MC Configuration Files
Shark automatically backs up configuration files when rendering MC configurations, facilitating subsequent parameter change comparison and problem troubleshooting.
Added Support for the Savepoint Feature
Support for the SAVEPOINT feature is provided, offering more flexible transaction control capabilities and facilitating data handling in complex business scenarios.
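Assuming the standard MySQL SAVEPOINT syntax applies (the accounts table is illustrative), a typical partial-rollback flow looks like:

```sql
START TRANSACTION;
INSERT INTO accounts (id, balance) VALUES (1, 100);
SAVEPOINT sp1;
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
-- Undo only the work done after sp1; the INSERT above is preserved.
ROLLBACK TO SAVEPOINT sp1;
COMMIT;
```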
Extended the BATCH LIMIT Syntax Feature
Enhanced support for partitioned tables in the BATCH LIMIT syntax, improving the flexibility and applicability of the querying feature.

Stability

Fixed the issue where specific persistent variables failed validation during process startup.
Fixed an issue where the SQLEngine process might fail during startup because it read and validated the persistent variable dd_history_record_cleaner_interval_second. The validation logic has been optimized to ensure safe handling during the startup phase when related background threads are not yet initialized, improving the process startup success rate.
Fixed the sporadic crash (Core) issue caused by defects in the underlying lock mechanism during DML/DDL operations.
Fixed a Use-After-Free (UAF) vulnerability caused by lock lifecycle management issues in the underlying BRPC library during stress testing scenarios such as Range Tree.
Enhanced Validation and Fault Tolerance Capabilities for Flush Table and Write Fence Broadcast Operations
Enhanced the execution logic for broadcasting FLUSH TABLE and Write Fence operations in the SQLEngine module. Added a local cache refresh mechanism for Schema version mismatches and strengthened error handling for flush_cache_table operations during abnormal situations such as RPC stalls, improving the robustness of DDL-related operations.
Fixed the process suicide issue caused by reading empty temporary table information during DDL Job rollback.
Fixed a defect in the SQLEngine where DDL Jobs during rollback attempted to acquire locks for "temporary tables" with an empty string as the database name. This led to invalid RPC requests to MC and eventually triggered process KillSelf. The fix ensures stability in DDL rollback procedures.
Fixed the crash (Core) issue that may occur during the initial startup phase due to the plugin system not being fully initialized.
Fixed an issue where SQLEngine might crash during specific startup sequences due to external requests accessing the uninitialized global plugin variable (global_system_variables.table_plugin) prematurely, causing null pointer access. This enhancement improves the robustness of the service startup process.
Optimized RPC Connection Timeout Parameter Configuration
Fixed the logical adaptation issue between the tdsql_rpc_connect_timeout parameter and persistent variables to ensure parameter modifications take effect.
Optimized Statistics RPC Retry Mechanism
Refined the retry logic for statistics-related RPCs to avoid prolonged occupation of system resources, ensuring the stability of operation execution.
Optimized the Stability of Inplace DDL Operations
Comprehensive improvements for Inplace DDL operations, including: optimized RENAME COLUMN to eliminate the write-blocking phase in non-LogService scenarios; enhanced atomicity for batch Rename operations; refined log output for start_ddl_job; fixed state management issues during DDL rollback to ensure reliability and consistency of DDL operations.
Resolved the issue of abnormal CPU resource occupation.
Refined the statistics-related RPC mechanism and added a timestamp mechanism to prevent redundant computations from causing excessive CPU consumption.
Optimized the Transaction Processing Mechanism for Vanished RG
Improved the reverse liveness check logic for Vanished RG transactions to ensure proper handling during RG state changes, enhancing system reliability.
Optimized the State Awareness Mechanism between MC and TDStore
Enhanced MC's awareness of TDStore operational status to ensure tasks are dispatched only after TDStore is fully initialized, improving operation success rates.
Optimized Concurrent Processing for Large Transactions and RG Merging
Optimized concurrency control for large transactions and RG merging operations to ensure the correctness and consistency of transaction processing in complex scenarios.
Optimized Master-Standby Failover Switching for High Availability
Improved the master-standby failover switching process to enhance the system's automatic recovery capability and service continuity during failure scenarios.
Optimized the Background Thread Exit Mechanism for DDL Recovery
Improved the management of DDL Recovery background threads to ensure timely response to exit signals, enhancing system maintenance efficiency.
Fixed the DROP VIEW operation connection loss issue.
Fixed the connection loss exception that occurred when jmysql executed DROP VIEW, ensuring stable operation execution.
Fixed the Clone Instance Specification Mismatch Issue.
Fixed the issue where data objects could not be created due to RG specification mismatches when multi-RG instances are cloned as single-RG instances.
Fixed the InstallSnapshot Persistent Failure Issue.
Fixed false positives caused by InstallSnapshot monitoring collection errors.
Merge RG: Removed the verification of leader and follower reporting consistency for index positions.
Removed the redundant position consistency check to enhance the stability of Merge RG.

Restoring via Backups

Enhanced the retry capability for SST file upload failures during full backup.
Optimized the fault tolerance of the full backup process. When errors occur during the upload of SST files to object storage (COS) due to network fluctuations or partial failures in the COS SDK multipart upload, a new retry mechanism has been added for this scenario, enhancing the overall success rate of backup tasks.
Optimized the Snapshot Backup and Recovery Process
Improved the replay logic for snapshot backup and recovery tasks, enhancing error handling and retry capabilities.
Enhanced the incremental backup status monitoring capability.
Supports querying incremental backup status via MySQL Status, enhancing the visibility and operational convenience of backup tasks.

Data Migration

Optimized the handling of node eviction during cloning.
Improved the handling logic for node eviction during cloning to ensure data recovery can be properly completed under abnormal node conditions.

Operations

Enhanced the bstack_fast diagnostic tool to support outputting raw call stacks by thread ID.
Added a non-aggregated output mode to the fast thread stack aggregation tool bstack_fast. In this mode, the tool outputs the raw call stack of each thread and associates it with its bthread_id or pthread_id, enabling Ops personnel to pinpoint the execution paths of specific threads when troubleshooting complex concurrency issues.
Fixed the archiving logic defect of tasks (Job) in the Management Console (MC) to prevent memory leaks and OOM.
Fixed a logical error in the Management Console (MC) when completed tasks are archived: When tasks in the "migrated" state existed in the task list, the archiving process would incorrectly terminate, causing all subsequent completed tasks to fail archiving. This resulted in a large number of tasks accumulating in memory, ultimately leading to MC OOM. After the fix, the archiving logic now correctly determines task states, ensuring timely memory release.
Unified all time durations in the slow query log (Slow Log) to seconds and fixed the display format.
Fixed the issue where execution times (such as Query_time) in the slow query log were displayed as 0 on the frontend interface after an upgrade. Additionally, standardized the format of the kernel slow query log: Query_time, Lock_time, and all other durations now use seconds (s) with a precision of 6 decimal places, consistent with the MySQL standard format, which facilitates parsing by the log center, DBBrain, and other monitoring tools and eliminates ambiguity.
Improved bstack_fast diagnostic tool's unwinding algorithm, using libunwind to enhance stack unwinding integrity.
Replaced the stack unwinding method of the bstack_fast tool from the original frame pointer (FP) unwinding to the libunwind library. The new approach correctly unwinds functions that do not save frame pointers (such as usleep, nanosleep in the libc library), resolving the issue of incomplete or skipped call stack information when encountering such functions with the original method. This enhancement ensures the diagnostic tool outputs call stacks that are more precise and closer to the results of gdb bt or the native bstack command, thereby improving online issue troubleshooting efficiency.
Enhanced the automatic diagnostic capability for stuck background threads
Optimized the system diagnostic feature: when the background monitoring Thread detects that all worker threads (Worker Thread) are in a stuck state, it automatically triggers the fast stack dump (Fast Dump Stack) mode. This mode exports call stack information for all pthreads and bthreads, providing critical on-site data for subsequent analysis of severe system hang issues.
Optimized the replica distribution policy for multi-AZ instances during startup
Fixed the issue of uneven Primary RG replica distribution during multi-AZ instance startup, ensuring balanced data distribution and enhancing system stability.
Promoted RPC Version Upgrade
When upgrading to version 21.2.3, the RPC version is updated synchronously to ensure feature compatibility.
Optimized the log output for distributed lock acquisition failures
Improved the log messages for MC distributed lock acquisition failures, providing more explicit warning messages.
Ensured Persistence of the Agent's Customized Configuration
Optimized the Agent configuration management mechanism to ensure that customized parameters are not overwritten by default values during restarts and upgrades.
Optimized the length limitation for Bvar status names
Removed the display of Bvar from SHOW GLOBAL STATUS and provided the in-memory table BVAR_INFO to display all Bvar values.
Enhanced the exception handling capability for Pod IP address allocation
Improved the probing mechanism for Pod IP address allocation failure scenarios to ensure the configuration change process remains unaffected.
Optimized MC Cntl RPC Timeout Configuration
Supports configuring the MC control RPC timeout via the tdsql_mc_meta_rpc_timeout parameter.
Optimized the log output level for transaction retries
Adjusted the output level of transaction retry logs to facilitate troubleshooting in daily Ops.
Enhanced the details of time consumption statistics for audit logs
Improved the segmented time consumption statistics in audit logs to provide finer-grained data for performance analysis.
Enhanced TDStore Slow Query Log Analysis Capability
Added granular slow query logs at both the transaction and RPC levels to provide more detailed performance analysis information, facilitating quick identification and resolution of slow query issues.
Optimized SQL Engine Upgrade Rollback Mechanism
Decoupled system table updates from the node restart process during upgrades. After all nodes restart, idempotent system table changes are applied via dbms_admin.upgrade(), enhancing upgrade safety and reliability.
Optimized Default Configuration of Binlog Parameters
Adjusted the default values of Binlog-related parameters to prevent confusion for users familiar with MySQL, improving user experience.
Enhanced KILL Command Error Feedback
Optimized the error message feedback when KILL command execution fails to provide clearer error prompts.
Optimized Persist Variable Setting Logic
Improved the setting and management mechanism of PERSIST variables, enhanced MC's support for global variable configuration files, and improved the flexibility of system configuration management. Added the syntax SET PERSIST node_type x = x1;, where node_type is one of hyper, storage, engine, cdc, columnar, or log_only; the variable x is synchronized only to nodes of the specified type. A plain SET PERSIST x = x1; is converted to SET PERSIST node_type x = x1; based on the node type of the node executing the command, and then executed.
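A short sketch of the new syntax (the variable name some_var and its value are illustrative):

```sql
-- Persist a variable only to storage-type nodes.
SET PERSIST storage some_var = 100;

-- Without an explicit node type, the statement is rewritten using the node
-- type of the executing node; on an engine node this is equivalent to:
SET PERSIST engine some_var = 100;
```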
Enhanced Transaction Lock Wait Slow Log Feature
Added slow query logs for transaction lock waits in TDStore, with dynamic control via a switch to facilitate issue identification and analysis.
Optimized Schema Status Display
Improved the SHOW CREATE TABLE command to correctly display all relevant Schema status information.
Unified RPC Packet Size Configuration Parameters
Deprecated the redundant tdstore_rpc_max_body_size parameter, unifying on tdsql_max_rpc_body_size to simplify configuration management.
Enhanced Kernel Monitoring Metrics Collection Capability
Enhanced the kernel monitoring metrics collection capability to provide richer system operational status monitoring data, facilitating system performance analysis and issue identification.
Enhanced Kernel Direct Sampling Metrics Timestamp Support
Improved the kernel direct sampling metrics API by adding timestamp information after the Value in Prometheus protocol metrics, facilitating time-series data analysis and monitoring alarm configuration.
Optimized Table Opening and Query Execution Logic
Improved the table opening and query execution process to avoid unnecessary correlated table operations, optimizing memory usage efficiency.
Optimized BulkLoad Parameter Configuration Process
Simplified the RocksDB parameter configuration involved in BulkLoad operations by providing low, medium, and high resource presets, reducing Ops efforts and improving import efficiency.
Provided a Quick Modification Tool for MC Global Variables
Developed a convenient tool to support quick modification of MC global variable information, enhancing Ops efficiency.
Optimized Unix Socket Lock File Settings
Improved the Unix Socket lock file settings logic to prevent startup failures caused by process conflicts.
Established TDStore Data File Format Upgrade Compatibility Specifications
Developed TDStore data file format upgrade compatibility specifications to ensure safe rollback capability during the upgrade process, enhancing system maintenance reliability.
Productized the Data Object Location Distribution Capability
Integrated foundational features related to data object location distribution to form a systematic product solution. Enhanced MC's scheduling capability for first-level time Range partitions, improving data management and scheduling capacities. Supports creating Distribution Policies via SQL statements, providing more flexible data distribution management methods.
Enabled RocksDB Monitoring Metrics
Enabled RocksDB monitoring metrics for range query counts, enhancing the system monitoring framework.
Fixed System Table Creation Issues in the 21.2 Version Upgrade Process
Improved the system table creation logic during the upgrade process to ensure safe rollback of DDL operations related to system tables in case of upgrade failures.

Bug fixes

Fixed an issue where, during the initialization phase of batch key-value access (Batch Access Key), the system failed to exit immediately upon receiving a termination (killed) signal, instead waiting until the next checkpoint. By advancing checkpoints at critical positions, queries can now respond more promptly to termination operations.
Reduced the timer precision requirement to optimize performance and fixed the issue of slow initialization on some machines.
Fixed a Crash issue in the MRR (Multi-Range Read) path triggered by specific complex nested queries during Fuzz Testing, involving combinations of clauses such as NOT EXISTS, UNION ALL, GROUP BY, and HAVING.
Fixed the performance issue of equality queries using the ref access method on partitioned tables. The root cause was that after fetching each record, the system had to perform the reset_parallel_scan_exec_flags operation for all partitions, which incurred overhead proportional to the number of partitions.
Fixed an issue where, when complex queries involving partitioned tables are executed using Multi-Range Read (MRR), the ha_rockspart partition table handler's ref_length_actual field failed to properly synchronize with the underlying storage engine's value. This resulted in the use of uninitialized abnormal values in subsequent calculations, causing memory access violations and process crashes (Core).
Fixed the issue where, when queries are executed on extra-large partitioned tables (such as 1,000 partitions), the optimizer had to scan each partition for number of rows estimation during range row estimation (records_in_range), resulting in high latency.
Fixed an issue where the transaction memory limit parameter max_txn_size did not match the expected value under specific configurations. Confirmed that the issue stems from inconsistencies in parameter parsing or passing, causing the transaction memory upper limit to be incorrectly set to the default value (approximately 1GB) instead of the configured value.
Fixed an issue where disaster recovery instances upgraded from versions prior to 21.2.0 encountered errors when synchronizing database user (DB User) permissions. The root cause was that the newly added DB User synchronization feature defaults to using a new account (tdsql3_sys_standby_xxx), while older disaster recovery instances used the legacy account (tdsql3_sys_standby), resulting in permission grant failures.
Fixed an issue where the BatchGetV3 API might return a part_ctx_version mismatch error under specific conditions. This problem was caused by redundant logic introduced during the development of the multi-scan feature, which could incorrectly route transactional read requests to the snapshot read path.
Fixed a logical error where, when the average number of records per key part (Keypart) (rec_per_key) for a Compound Index is estimated, "subsequent key parts might have a larger rec_per_key value than prefix key parts." The root cause was that rec_per_key values for different key parts were independently estimated by different algorithms without mutual constraints between them.
Fixed an issue where ANALYZE TABLE failed to properly update table row statistics (table_rows) when column histograms (Histogram) existed. The root cause was that in specific code paths, the number of rows estimated by histograms incorrectly overrode the number of rows obtained through exact scans during ANALYZE TABLE.
Fixed an issue where stale column histograms (Histogram) could affect newly generated index statistics after data updates. This resulted in incorrect rec_per_key and cardinality values for compound indexes even after ANALYZE TABLE is executed, thereby impacting the optimizer's row estimation.
Fixed an issue with the logic for accumulating the number of distinct values in prefix keys (total_prev_ndv) when column histograms (Histogram) are used to estimate the selectivity of compound index prefixes. This error caused the rec_per_key calculation for subsequent key parts to use an incorrectly amplified number of distinct prefix values, resulting in inaccurate estimations.
Fixed an issue where sending a kill -19 signal to the main MC process in specific hybrid node failure scenarios could cause I/O to persistently drop to zero.
Fixed an issue where the MC module could get stuck and fail to stop properly during exit and master switchover processes due to a potential deadlock in the GetRepGroupsByTs function.
Fixed an issue where abnormal binlog generation occurred when UPDATE statements containing subqueries in the WHERE condition were executed.
Fixed an issue where the recovery process would get stuck when encountering jobs with PassiveAbortStage during snapshot restoration, as these jobs did not require reporting.
Fixed the issue where the TDStore client lost log receiver information when executing get_raft_node_info.
Fixed the issue where transactions were wrongfully aborted during merge operations after an RG leader switchover.
Fixed the issue of lingering log-receivers and optimized the exit process to ensure proper cleanup of residual log-receivers even when log-service is absent.
Fixed an issue where the virtual table TDSTORE_INSTALL_SNAPSHOT_INFO encountered parsing errors after upgrade due to changes in storage format.
Fixed an issue where expanded RGs unintentionally persisted merge log positions during LogService replay.
Fixed an issue in version 21.2.3 where the cluster scaling-down process from a 3-replica configuration to a 2+1 configuration would stall. This occurred because the migration condition check failed to specially handle log replicas (which should have a size of 0).
If LogService has not completed initialization, querying the related view now returns immediately.
Fixed an issue in the 1PC commit_ts optimization logic that could cause a transaction's commit_ts to be less than safe_read_ts.
Fixed the confusion issue related to the db_pending_compaction_bytes_limit monitoring metric.
Fixed an issue where the LogService manager component panicked due to starting after the controller.
Fixed an issue where executing TRUNCATE PARTITION failed when the table contained 64 Keys.
Fixed the issue of incorrect size calculation logic for slow query logs.
Fixed an issue where the log receiver received an InstallSnapshot before establishing a connection to the leader, leaving start_index at its uninitialized default of 2 and causing start_index to be miscalculated.
Fixed the issue where the tdstore_compact_on_delete_ratio parameter was incorrectly set to 0 during the refactoring in version 21.0.0.
Fixed an issue where checkpoints generated during CDC replay of Raft Log caused index rollback.
Fixed the issue where auto commit transactions caused inaccurate statistics in distributed transactions.
Fixed the issue where the process gets stuck when the task to modify the primary AZ is initiated during full backup.
Fixed an issue where the Raft Leader would still Purge Raft Log when multiple nodes are scheduled for Log Receiver objects.
Fixed the issue where the Item_func_group_concat function executed with exceptions in row-based MPP mode.
Fixed the issue where the ImproveLocation feature erroneously migrated replication groups (RGs) that failed to be created.
Fixed an issue where the size of RPC packets was not validated when BatchPut was used with regular secondary indexes.
Fixed a core issue caused by the ha_rocksdb::check_index_dup_key function in the self-test environment.
Fixed an issue where offline tasks generated by replica reduction operations failed to properly restore replica status during the abort process.
Fixed the issue where temporary associated table definitions were left lingering after a failed online copy operation.
Fixed the issue where lingering stage information was not cleaned up when DDL tasks failed.
Fixed the issue where manual partition redistribution failed after instance scale-in due to forced split not being set.
Fixed the issue where whole-table statistics were not updated synchronously when partitioned tables were updated.
Fixed an issue where partitioned tables did not correctly invoke ha_rocksdb::records_from_index when executing count(*) through secondary indexes.
Fixed a deadlock issue that could occur when indexes are added to partitioned tables with explicit partitioning policies.
Fixed the issue of a brief leaderless state occurring after the MC leader node was terminated.
Fixed the issue where UPDATE statements incorrectly triggered AutoInc RPCs on non-auto-increment columns of tables with non-hidden primary keys.
Fixed the issue where SQL statements timed out during pre-checks when creating disaster recovery instances.
Fixed the issue where overly small CVM specifications caused bthread_timer_threads parameter validation to fail, resulting in Pod crashes.

Syntax Change

| Change Type | Syntax | Note |
| --- | --- | --- |
| Addition | SELECT CURRENT_GLOBAL_TIMESTAMP(); | Obtains the current global timestamp (GTS). SELECT FROM_UNIXTIME(CURRENT_GLOBAL_TIMESTAMP() >> 24); returns the physical time part of the GTS. |
| Modification | | Automatically converts to the corresponding SET PERSIST node_type x = x1 format based on the current node type; takes effect only for nodes of the same type. |
| Addition | SAVEPOINT sp_name<br>ROLLBACK TO SAVEPOINT sp_name<br>RELEASE SAVEPOINT sp_name | Adds support for the SAVEPOINT feature: create a savepoint within a transaction, roll back to it, or release it. |
| Modification | | Adds support for partitioned tables, with a new limitation: the BATCH LIMIT feature is disabled when the WHERE clause contains GROUP BY or LIMIT. |
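The FROM_UNIXTIME(CURRENT_GLOBAL_TIMESTAMP() >> 24) form above suggests that a GTS packs a physical Unix-seconds clock in its upper bits and a logical counter in the low 24 bits. The following sketch illustrates that decomposition; the exact field layout is an assumption inferred from the 24-bit shift shown above, not a documented format.

```python
import datetime

LOGICAL_BITS = 24  # assumed width of the logical-counter field, per the >> 24 shift above

def pack_gts(physical_seconds: int, logical: int) -> int:
    """Combine a Unix-seconds physical clock and a logical counter into one GTS value."""
    assert 0 <= logical < (1 << LOGICAL_BITS)
    return (physical_seconds << LOGICAL_BITS) | logical

def unpack_gts(gts: int) -> tuple[int, int]:
    """Split a GTS back into (physical_seconds, logical_counter)."""
    return gts >> LOGICAL_BITS, gts & ((1 << LOGICAL_BITS) - 1)

gts = pack_gts(1_700_000_000, 42)
physical, logical = unpack_gts(gts)
print(physical, logical)  # 1700000000 42
# The physical part is an ordinary Unix timestamp, so it converts to wall-clock time:
print(datetime.datetime.fromtimestamp(physical, datetime.timezone.utc))
```

The right shift in the SQL example plays the same role as unpack_gts: it discards the logical counter so FROM_UNIXTIME can interpret the remainder as seconds.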

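The SAVEPOINT statements added above (create, roll back to, release) follow standard SQL semantics. A quick way to see the behavior without a TDSQL instance is sqlite3, which implements the same three statements; this is a stand-in demonstration, not TDSQL itself:

```python
import sqlite3

# isolation_level=None gives manual transaction control, so BEGIN/COMMIT
# and savepoint statements pass through to the engine unchanged.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")              # create a savepoint inside the transaction
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")  # undo work done after sp1; row 2 is gone
conn.execute("RELEASE SAVEPOINT sp1")      # discard the savepoint, keeping row 1
conn.execute("COMMIT")

print([row[0] for row in conn.execute("SELECT id FROM t")])  # [1]
```

Note that ROLLBACK TO SAVEPOINT undoes only the work performed after the savepoint; the enclosing transaction stays open until COMMIT.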
Data Dictionary Change

| Change Type | Data Dictionary | Note |
| --- | --- | --- |
| Addition | | Displays all bvar statistics monitoring items in the TDSQL Boundless system. |
| Addition | | Provides a quick check for tables in the current instance that are not supported by the disaster recovery feature. |
