V20.0.2
Bug Fixes
Fixed an issue where a node crash during CREATE TABLE could accidentally delete an existing table with the same name.
V20.0.1
Version Release Notes
Syntax and Features
Introduction of the INDEX_FOR_GROUPBY and NO_INDEX_FOR_GROUPBY hints
INDEX_FOR_GROUPBY and NO_INDEX_FOR_GROUPBY provide hints to control whether the Loose Index Scan method is adopted during GROUP BY operations, reducing scan overhead. For details, see INDEX_FOR_GROUPBY.
Stability
SQLEngine supports setting parameters to limit the maximum size of a single transaction.
Provides the tdsql_max_memory_per_transaction_bytes parameter to limit the write volume of a single transaction; users can configure it manually to guard against large transactions.
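A minimal sketch of setting this limit; the 1 GiB value below is illustrative, and the GLOBAL scope is an assumption not stated in these notes:

```sql
-- Illustrative: cap a single transaction's write volume at 1 GiB (assumed GLOBAL scope).
SET GLOBAL tdsql_max_memory_per_transaction_bytes = 1073741824;
```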
Bug Fixes
Fixed an issue where Binlog Dump's memory consumption could exceed the configured binlog_dump_cache_max_size; the cache is now temporarily written to files to alleviate memory pressure.
Fixed an issue where, after a CDC node restart, an empty last Binlog file could cause duplicate transaction writes to the Binlog.
Fixed an issue where MC skipped checking the reject-leader tag when actively triggering RG leader switchover tasks in certain scenarios.
Fixed an issue where MC failed to consider instance concurrency limits during Leader position restoration after upgrades, resulting in RG master switchover task dispatch failures.
Fixed an issue where TDStore controlled Raft Cache memory at the granularity of individual Raft nodes, which could cause uncontrolled memory consumption when a single TDStore node hosted a large number of Raft nodes. Memory is now controlled globally.
Fixed an issue during rolling version upgrades where the coor_term field was not set correctly when the coordinator received a response from a lower-version participant, leading to handling exceptions. The coordinator now identifies this scenario via rpc_version and applies compatibility handling.
Fixed an issue where MC, in scenarios involving overlapping HyperNode read-only and migration tasks, caused tasks to persist in an incomplete state due to incorrect status settings, thereby blocking subsequent task scheduling for the RG.
Fixed the calculation logic for the tolerance coefficient in Space Ratio mode; it no longer multiplies by the average available disk space.
V20.0.0
Version Release Notes
Scalability and Performance
LogService supports UK table synchronization.
LogService's MysqlClient mode supports synchronizing tables with Unique Keys (UK).
Optimized execution performance for large partitioned tables
Optimized key functions in the partitioned table logic to enhance the execution performance of large partitioned tables.
Optimized the Optimizer's row-count estimation within ranges
Designed a new Range Cache to store SST Block Statistics, enabling the Optimizer to more accurately estimate row counts within query ranges using these Cached Statistics. Added the Session variable enable_range_cache as a feature switch.
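A minimal sketch of toggling the feature switch named above, assuming it accepts the usual ON/OFF values:

```sql
-- Enable Range Cache-based row estimation for the current session (assumed ON/OFF values).
SET SESSION enable_range_cache = ON;
```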
Convert all indexed joins to BKA joins using Batch RPC.
At the execution layer, BKA joins are executed by sending Batched RPCs. Enable this feature by turning on the force_batched_key_access switch in optimizer_switch; it significantly improves the performance of indexed joins.
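For illustration, the switch can be turned on with the standard optimizer_switch syntax:

```sql
-- Enable BKA-to-Batched-RPC conversion for the current session.
SET SESSION optimizer_switch = 'force_batched_key_access=on';
```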
BatchGet/BatchCheck/BatchPut aggregation by RG
Increased aggregation granularity to the RG level to further reduce the number of RPCs and improve performance.
Supporting Multiple Deadlock Rollback Priorities.
InnoDB prioritizes rolling back transactions with less data written when selecting deadlock victims. Previously, TDStore's behavior differed from InnoDB by selecting the later-started transaction as the deadlock victim. In this version, TDStore introduces a new system variable tdstore_deadlock_victim to control the selection of deadlock victims, defaulting to align with InnoDB's behavior.
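A sketch of selecting the victim policy described above; the GLOBAL scope is an assumption:

```sql
-- Default behavior, aligned with InnoDB: roll back the transaction that wrote the least data.
SET GLOBAL tdstore_deadlock_victim = 'WRITE_LEAST';
-- Pre-V20.0.0 TDStore behavior: roll back the most recently started transaction.
-- SET GLOBAL tdstore_deadlock_victim = 'START_LATEST';
```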
Supporting Explicit Affinity Binding.
Support explicit creation/binding/unbinding of affinity policies via SQL. Tables bound to the same affinity policy will have their corresponding partitions (for partitioned tables) or entire tables (for non-partitioned tables) scheduled to the same RG (dependent on Merge; requires merge-rep-group-enabled set to 1). Affinity relationships are strictly maintained during RG splits. Currently, only first-level HASH partitions and non-partitioned tables are supported. For details, see CREATE PARTITION POLICY, DROP PARTITION POLICY, CREATE TABLE, and ALTER TABLE.
Supporting DML Query Forwarding.
Forwarding DML queries to the node where table data resides reduces RPCs for data transmission between compute and storage nodes when the SQL access node differs from the data node. As an experimental feature, it currently only supports autocommit DML, does not support Prepared Statements or Multi-Statement in One Query. The GLOBAL parameter tdsql_enable_proxy controls this feature and is disabled by default.
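Since the notes describe tdsql_enable_proxy as a GLOBAL parameter that is disabled by default, enabling the experimental feature might look like the following (ON/OFF values are assumed):

```sql
-- Experimental: forward autocommit DML to the node where the table data resides.
SET GLOBAL tdsql_enable_proxy = ON;
```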
Supporting Multiple Sub-Plans in Parallel Query Planning.
Support specifying multiple tables for parallel scanning via the PARALLEL Hint within a Query Block. The query optimizer will split the entire query plan into multiple sub-plans to execute in parallel.
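The exact PARALLEL hint syntax is not shown in these notes; the following is a hedged sketch assuming a per-table hint form, with table names t1/t2 purely illustrative:

```sql
SELECT /*+ PARALLEL(t1) PARALLEL(t2) */ t1.a, COUNT(*)
FROM t1 JOIN t2 ON t1.id = t2.id
GROUP BY t1.a;
```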
Supports converting Raft storage from Multi-Raft-DB to Segment.
If an online instance has nodes whose Raft storage is Multi-Raft-DB, configure two parameters: set raft_log_storage_type to 1 (indicating Segment usage) and tdstore_enable_raft_log_convert_to_segment_storage to true. Multi-Raft-DB is then automatically converted to Segment on restart. After conversion, the Multi-Raft-DB data is backed up in a multi-raft-db-bak directory, which must be deleted manually later.
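The two settings above, sketched as configuration entries; the file and section they belong in are not stated here, so verify against your deployment's configuration template:

```
raft_log_storage_type = 1
tdstore_enable_raft_log_convert_to_segment_storage = true
```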
Supporting Configuration of default_collation_for_utf8mb4 Values via Settings.
Support configuring the default Collation for the utf8mb4 character set via the default_collation_for_utf8mb4 parameter in template.cnf. The default setting is default_collation_for_utf8mb4 = utf8mb4_general_ci.
Modifies the behavior of the CONVERT function. When the target character set for CONVERT is utf8mb4 and no Collation is explicitly specified, the target Collation uses the value of the default_collation_for_utf8mb4 variable rather than utf8mb4_0900_ai_ci (MySQL's default behavior).
Supports modifying the value of default_collation_for_utf8mb4 to options other than utf8mb4_general_ci and utf8mb4_0900_ai_ci.
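The modified CONVERT behavior can be observed with MySQL's standard COLLATION() function; with the default setting, the query below reports utf8mb4_general_ci rather than utf8mb4_0900_ai_ci:

```sql
-- No collation is explicitly specified, so the target collation
-- follows default_collation_for_utf8mb4.
SELECT COLLATION(CONVERT('abc' USING utf8mb4));
```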
Supporting First-Level Key Partitioning Binding Partition Policy.
For first-level Key partitioning, it supports binding the same Partition Policy to partitioned tables with identical partitioning methods (same number of partitions and partition keys).
Optimized MC's heartbeat handling and hotspot statistics for TDStore.
Eliminated the object pool used in heartbeat processing to reduce redundant data copying.
Merged the coroutines for node hotspot information statistics and global hotspot table object statistics to reduce CPU resource consumption.
Merge secondary indexes not in the same RG as the primary key into the RG where the primary key resides.
When an index is added, a secondary index may not reside in the same RG as the primary key (as the primary key's RG might be executing other tasks). It must later be merged into the primary key's RG through scheduling.
Support switching internal transaction timestamping to TDStore.
When there is only a single RG or in cross-data-center deployments, using TDStore for transaction timestamping can reduce latency, with particularly significant improvement for point queries.
Supporting RG merging without terminating transactions.
During the process of merging a Vanished RG into an Expanded RG, transactions within the Vanished RG are migrated to the Expanded RG for execution. This may cause brief transaction latency, but transactions are not actively terminated (unlike the original implementation, which terminated all transactions in the Vanished RG).
Stability
Leader distribution snapshot and restore before and after instance rolling upgrades
After instance rolling upgrades, the pre-upgrade RG Leader distribution can be restored to ensure database performance stability.
Quickly release residual participant context.
In previous versions, if SQLEngine crashed, its residual transaction participants on TDStore nodes typically took several minutes to be released (relying on a separate thread's periodic detection). The pessimistic locks held by these participants could block other transactions' reads and writes, impacting availability. In V20.0.0, TDStore introduces node crash detection (toggled via the configuration item tdstore_enable_node_alive_detection). With crash detection enabled, an SQLEngine failure can be detected within 2s, allowing its residual participants to be released quickly.
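A sketch of enabling crash detection via the configuration item named above; the boolean value format is an assumption:

```
tdstore_enable_node_alive_detection = true
```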
Database management
Disaster Recovery Feature Optimization
Added disaster recovery relationship validation: If an instance has an active disaster recovery relationship, the prompt "Please terminate the disaster recovery relationship before initiating termination and refund" will be displayed.
Support purchasing/establishing disaster recovery links for existing instances (running for over 5 days) through backup and recovery mode.
Disaster recovery switchover/disconnection operation restriction: Can only be performed when both primary and secondary instances are in the Running state.
Support for forced disaster recovery switchover. Forced switchover is a high-risk operation and is not recommended when the primary instance is in normal running state.
Support for intra-region and cross-region cloning to establish disaster recovery, enhancing the system's resilience against major disasters. Through cross-region cloning, data is replicated to disaster recovery centers in different geographic locations, effectively mitigating threats to data from extreme scenarios such as natural disasters and regional network failures.
Support for Shared Object Lock.
Prior to V20.0.0, only node-level exclusive Object Locks were supported. Starting with V20.0.0, session-level exclusive and shared Object Locks are supported.
Online DDL Acceleration
In daily operations, table structures frequently need adjustment, such as adding columns or indexes. With large datasets, however, adding indexes or modifying columns can be time-consuming and hinder business development. Starting with V20.0.0, TDSQL extends the scenarios FastOnlineDDL supports, accelerating online DDL operations by an order of magnitude.
In V19.0.0, Fast Online DDL only supported adding indexes to non-partitioned tables. V20.0.0 extends support to adding indexes (including unique indexes) and adding/dropping columns for non-partitioned tables, while partitioned tables also gain support for adding/dropping columns.
SET SESSION tdsql_ddl_fillback_mode = 'IngestBehind';
CREATE TABLE sbtest1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT, c INT) PARTITION BY HASH (a) PARTITIONS 3;
INSERT INTO sbtest1 VALUES(1,1,1),(2,2,2),(3,3,3);
ALTER TABLE sbtest1 ADD INDEX idx_b(b);
FastOnlineDDL Usage Instructions
call dbms_admin.set_schedule_config("merge-rep-group-enabled","1");
Security Enhancement
None.
Data Migration
Binlog Dump Adaptation for Community Binlog Feature
Supports generating Binlog for the DDL syntax CREATE TABLE tb2 AS SELECT xxx FROM tb1.
Supports the RESET MASTER and FLUSH LOGS commands.
After Binlog Dump is enabled, the log_bin_trust_function_creators parameter is automatically enabled.
MyDumper Supports Multi-threaded Consistent Dumps.
When MyDumper is used for multi-threaded exports, flashback queries are used to ensure multi-threaded backup consistency.
MyLoader Supports Recording Session Variables During Imports.
MyLoader records the Session variables from the source file and restores them after disconnection from the database, preventing data inconsistency caused by disconnections.
MyLoader Supports Recognizing TDSQL Error Codes.
MyLoader supports recognizing numeric error codes such as 1082 to determine error types.
Restoring via Backups
Backup and Recovery Adaptation for FastOnlineDDL
Supports incremental backups of BulkLoad External SST files generated during FastOnlineDDL execution.
Supports restoring BulkLoad External SST files during incremental recovery.
Optimization of Purge Range for Computing Incremental Backup Logs in Backup and Recovery
Changed the Purge Range scanning method to a full scan to avoid false positives caused by Purge Range.
Operations
Supports CPU/Memory Hot Loading
Supports dynamically adjusting resource parameters such as CPU and Memory, and synchronously updating related parameters.
Returns a clear error message when the auto-increment value exceeds the limit.
When the auto-increment value exceeds the limit, a clear error message "auto_increment value exceeds max value" is returned.
MC & MC-Agent Support Distinguishing Between Data Disks and Log Disks.
MC Pod now includes a dedicated log disk. Logs from mc-server, mc-agent, and TDBR will be written to this log disk, distinguished from data disks. This prevents frequent log refreshing from monopolizing disk space and affecting normal process execution.
ServiceLink supports binding specified Pods to routes.
Supports binding specified Pods to multiple LBs (VIPs) without dynamic adjustments during scaling operations.
Supports managing multiple ServiceLinks for the same instance, with each ServiceLink associated with an Eros network ID.
EngineAgent supports closed-loop automation of parameters.
EngineAgent's kernel side provides a parameter rendering script to support closed-loop automation of parameters, offering the engine-agent-conf-gen executable for performing parameter rendering.
Bug Fixes
Fixed an issue where the log_bin_trust_function_creators variable was not automatically enabled on CDC nodes after Binlog Dump was enabled.
Fixed an issue where the Agent collected Binlog Latency for all nodes. Binlog Latency is no longer collected for HyperNodes.
Fixed the issue where the monitoring metrics data_db_bytes_read and data_db_bytes_written displayed the y-axis units as "hundred billion" instead of "hundred billion bytes" after "adaptive units" is selected.
Fixed an issue where the modifyConfigFileAfterAddMember step sent HTTP requests to itself during MC startup.
Fixed an issue where validation was not performed during the pre-check phase when AZs are removed and nodes are simultaneously scaled in.
Fixed an issue where a low-probability Crash might occur during Recovery after a failed DROP TABLE operation.
Fixed an issue where reloading statistics for empty partitioned tables failed, resulting in statistics not being updated in a timely manner.
Fixed an issue where task archiving deleted caches before persistence. When timestamps cannot be obtained, tasks will not be archived on the current MC Leader.
Fixed an issue where the Agent might fail to perform Heap dumps on the main process.
Fixed an issue where MC did not check the Raft Index Gap between the RG Leader and Follower before sending the leader switch task.
Fixed potential concurrency issues in the recycle bin Flashback Table statements.
Fixed a potential deadlock during concurrent execution of UPDATE and DDL.
Fixed an issue where creating/switching to the disaster recovery kernel version fails after an upgrade to V19.2.0 is performed.
Fixed an issue where, when the MySQL Community Edition SERVER_VERSION increases, the upgrade process updates system tables in the mysql schema, views in the sys schema, and non-DD-based views in information_schema, but when only the TDSQL-defined SERVER_SUB_VERSION increases, the upgrade process did not update these internal tables, potentially losing internal tables defined in the new version.
Fixed an issue where a Primary RG was created during instance upgrade.
Parameter Change
| Change Type | Parameter | Description |
| --- | --- | --- |
| Modification | binlog_dump_cache_max_size | Limits the maximum memory used by Binlog Dump, in bytes. |
| Addition | force_batched_key_access | Set to on or off. When on, index-based joins are converted to BKA joins using Batch RPC, reducing data transfer and improving performance. |
| Modification | tdsql_auto_increment_batch_size | Sets the number of auto-increment values obtained by SQLEngine in a single operation. The sequence of auto-increment values is stored on storage nodes. In certain scenarios, to improve insertion performance, the system can allocate consecutive auto-increment values for multiple insert operations at once, reducing the overhead of the auto-increment generator. Adjust this parameter to control the size of the batch allocation. |
| Addition | tdstore_enable_node_alive_detection | Controls whether the node crash detection feature is enabled. |
| Addition | tdstore_deadlock_victim | Determines which transaction to roll back when a deadlock occurs. When set to WRITE_LEAST, it prioritizes rolling back the transaction with the least data written. When set to START_LATEST, it prioritizes rolling back the most recently started transaction. |
| Addition | | Sets the maximum threshold for the time taken by a proactive leader switch task (including migrating transactions from the old leader and sending requests to the new leader). |
| Addition | | Sets the maximum threshold for the time taken by the Raft layer to switch leaders (including new leader election and taking office), in milliseconds. |
| Addition | | Disk throughput throttling for Fast Online DDL IngestBehind mode; limits the read/write throughput consumed during the data backfill phase. Neither read nor write throughput will exceed this value, which may affect Fast Online DDL execution efficiency. Setting it to 0 disables throttling. |