V18.2.1
Version Release Notes
This version fixes the issue where data may fail to be deleted after TRUNCATE TABLE.
After TRUNCATE TABLE or DROP TABLE clears the data, the freed space may still be occupied. This is caused by leftover Regions for which deletion tasks were never issued, so those Regions and their corresponding data could be retained permanently.
This version fixes the aforementioned issues and provides an HTTP API for deleting redundant Regions:
/meta-cluster/api/v2/job/delete-redundant-regions/{head.cluster_id}
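A minimal sketch of calling this API from the command line. The host, port, and cluster ID below are placeholders, and the HTTP method (POST) is an assumption; the actual request is left commented out so nothing is issued by accident:

```shell
# Placeholders: meta-cluster host/port and the target cluster ID.
MC_HOST="127.0.0.1"
MC_PORT="8080"
CLUSTER_ID="1001"

# Substitute the cluster ID into the API path from the release notes.
API_PATH="/meta-cluster/api/v2/job/delete-redundant-regions/${CLUSTER_ID}"
URL="http://${MC_HOST}:${MC_PORT}${API_PATH}"

# Uncomment to run against a live meta cluster (POST is an assumption;
# confirm the expected method and any authentication for your deployment):
# curl -X POST "${URL}"
echo "${URL}"
```

Replace the placeholders with the values for your deployment before running the request.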
Bug fixes
Fixed the issue where deletion tasks were not issued for some leftover Regions after TRUNCATE TABLE or DROP TABLE.
Fixed the issue where storage space remained occupied after TRUNCATE TABLE cleared all data.
V18.2.0
Version Release Notes
Operations
3AZ instances support a primary AZ
When an instance is created, if the disaster recovery mode is 3-replica Raft (3 AZs), users can choose to set the primary AZ.
If a primary AZ is set, RG Leaders switch to nodes in the primary AZ to reduce distributed transactions and improve performance. While the primary AZ setting is active, capabilities that may cause RG Leader switching, such as hotspot scheduling, are disabled to keep RG Leaders in the primary AZ. This configuration suits most business scenarios: primary replicas reside in a single AZ, which avoids cross-AZ transactions and reduces transaction response time.
If no primary AZ is set, data is evenly distributed across all AZs. This configuration is suitable for scenarios with high write volumes and only single-machine transactions, allowing full utilization of node resources in all AZs.
INFORMATION_SCHEMA adds MC Leader switchover history view
Added INFORMATION_SCHEMA.META_CLUSTER_LEADER_HISTORY to display the MC Leader switchover history. For details, see META_CLUSTER_LEADER_HISTORY.
Bug fixes
Fixed the issue where capacity balancing occurred on empty instances after table creation but before data import, disrupting the planned RG distribution established during table creation.
Fixed the issue where hotspot scheduling used statistics of different dimensions for node information and RG information, causing unnecessary scheduling: node information used a weighted average over the past minute, while RG information used the instantaneous value from the last heartbeat.
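The META_CLUSTER_LEADER_HISTORY view introduced above can be queried like any other INFORMATION_SCHEMA view. A minimal sketch via the MySQL command-line client, assuming a MySQL-compatible endpoint; the host, port, and user are placeholders, and the actual invocation is left commented out:

```shell
# Placeholders: connection parameters for your instance.
HOST="127.0.0.1"
PORT="3306"
USER="admin"

# Query the MC Leader switchover history. SELECT * is illustrative;
# see the META_CLUSTER_LEADER_HISTORY reference for the actual columns.
QUERY="SELECT * FROM INFORMATION_SCHEMA.META_CLUSTER_LEADER_HISTORY;"

# Uncomment to run against a live instance:
# mysql -h "${HOST}" -P "${PORT}" -u "${USER}" -p -e "${QUERY}"
echo "${QUERY}"
```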