Tencent Cloud

TDSQL Boundless

Release Notes
Product Introduction
Overview
Scenarios
Product Architecture
Instance Types
Compatibility Notes
Kernel Features
Kernel Overview
Kernel Version Release Notes
Functionality Features
Performance Features
Billing
Billing Overview
Purchase Method
Pricing Details
Renewal
Overdue Payments
Refund
Getting Started
Creating an Instance
Connect to Instances
User Guide
Data Migration
Data Subscription
Instance Management
Configuration Change
Parameter Configuration
Account Management
Security Group
Backup and Restoration
Database Auditing
Tag Management
Use Cases
Technical Evolution and Usage Practices of Online DDL
Lock Mechanism Analysis and Troubleshooting Practices
Data Intelligent Scheduling and Related Practices for Performance Optimization
TDSQL Boundless Selection Guide and Practical Tutorial
Developer Guide
Developer Guide (MySQL Compatibility Mode)
Developer Guide (HBase Compatibility Mode)
Performance Tuning
Performance Tuning Overview
SQL Tuning
DDL Tuning
Performance White Paper
Performance Overview
TPC-C Test
Sysbench Test
API Documentation
History
Introduction
API Category
Making API Requests
Instance APIs
Security Group APIs
Task APIs
Backup APIs
Rollback APIs
Parameter APIs
Database APIs
Data Types
Error Codes
General Reference
System Architecture
SQL Reference
Database Parameter Description
TPC-H Benchmark Data Model Reference
Error Code Information
Security and Compliance
FAQs
Agreements
Service Level Agreement
Terms of Service
Privacy Policy
Data Processing And Security Agreement
Contact Us
Glossary

FAQs

Last updated: 2026-03-06 18:50:08

What Database Protocols Is TDSQL Boundless Compatible With?

TDSQL Boundless is compatible with the MySQL 8.0 protocol. Users can treat it as a MySQL 8.0 instance, but certain operations are restricted. For details, see Usage Instructions.

Does TDSQL Boundless Require a Shard Key (ShardKey)?

TDSQL Boundless does not require defining shard keys, and its table creation syntax remains consistent with native MySQL. The sharding mechanism of TDSQL Boundless is based on MySQL's native partitioned tables. In most cases, first-level Hash partitioning is sufficient to meet requirements, distributing Hash partitions across all data nodes to evenly distribute write pressure.
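The effect of first-level hash partitioning can be sketched as follows. This is a toy model, not TDSQL Boundless internals: it assumes MySQL-style `HASH` partitioning (key modulo partition count) and a hypothetical round-robin assignment of partitions to data nodes.

```python
# Toy model of first-level HASH partitioning spreading writes across nodes.
# Assumptions: partition = key % num_partitions (MySQL HASH semantics),
# partitions assigned to nodes round-robin. Illustrative only.

def partition_for(key: int, num_partitions: int) -> int:
    """MySQL-style HASH partitioning: partition = key mod partition count."""
    return key % num_partitions

def node_for(partition: int, num_nodes: int) -> int:
    """Assumed round-robin placement of partitions on data nodes."""
    return partition % num_nodes

num_partitions, num_nodes = 6, 3
counts = [0] * num_nodes
for order_id in range(10_000):
    p = partition_for(order_id, num_partitions)
    counts[node_for(p, num_nodes)] += 1

print(counts)  # near-equal per-node counts: write pressure is balanced
```

With a sequential key, each node receives an almost identical share of the writes, which is the load-balancing effect the answer above describes.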

What Are the Differences in Read/Write Performance Between the TDSQL Boundless Engine and Native MySQL?

TDSQL Boundless is a native distributed database. It maintains consistency among the replicas of each data unit through the Raft protocol. By default, each data unit has three replicas, and this data is evenly distributed across all data nodes. This distribution policy significantly improves the efficiency of handling large volumes of writes, excelling in particular in write-heavy, read-light business scenarios.
Compared to MySQL's InnoDB storage engine, TDSQL Boundless achieves a data compression ratio 3-9 times higher. This not only effectively reduces storage requirements but may also improve write I/O performance. Therefore, TDSQL Boundless is particularly suitable for scenarios with demanding write performance requirements.

Does TDSQL Boundless Provide the Capability for Read-Write Separation?

Read-write separation is often used to solve the following two problems:
1. In a traditional leader/follower architecture, read-write separation can fully leverage the resources of the standby database.
2. For applications with distinct analytical processing (AP) and transaction processing (TP) business scenarios, read-write separation can avoid mutual interference between them.
However, the architecture of TDSQL Boundless differs from traditional leader/follower architectures (such as InnoDB). In TDSQL Boundless, data is evenly distributed across all nodes, allowing full utilization of resources such as CPU and I/O on each node without the need for read-write separation to leverage standby database resources.
For scenarios that require isolating analytical (AP) read SQL from transactional (TP) workloads, corresponding support will be provided in later versions.

Does TDSQL Boundless Support Read-Only Accounts and Read-Only Nodes?

TDSQL Boundless currently does not support read-only accounts or read-only nodes. In its peer-to-peer node architecture, read/write requests can be distributed across different nodes to fully utilize the resources of each node. For the product architecture of TDSQL Boundless, see Product Overview.
We will add support for read-only nodes in later versions. Stay tuned.

What Table Types Does TDSQL Boundless Support? Does It Include Single Tables, Broadcast Tables, and Partitioned Tables?

TDSQL Boundless currently supports regular tables, both partitioned and non-partitioned.
For partitioned tables, the data in different partitions can be distributed across different nodes.
For non-partitioned tables with a large data volume, the underlying data shards (RGs) will be split and evenly distributed across the nodes.
TDSQL Boundless also supports broadcast tables: a table can be defined at creation time to be replicated to all nodes.

In TDSQL Boundless, Is It Necessary to Use Partitioned Tables?

In a single-node architecture, partitioned tables are primarily used to improve SQL performance through partition pruning and to periodically purge data via DROP PARTITION. In TDSQL Boundless distributed scenarios, partitioned tables additionally leverage the write capacity of multiple nodes, which is especially important for processing large volumes of data.
When an enterprise is facing large-scale data migration, it is recommended to pre-convert large tables into hash-based partitioned tables. This approach leverages TDSQL Boundless' multi-node capabilities to accelerate the data import process.
If no pre-partitioning is done and a single table is created instead, all write operations will initially concentrate on one data node during data import, which can lead to I/O bottlenecks. TDSQL Boundless provides the feature of automatic splitting and data migration. However, if the table is not partitioned initially, this process may be slow, and replica balancing during splitting and migration will incur additional I/O overhead.
By creating partitioned tables, you can maximize the capabilities of the TDSQL Boundless distributed database. The cost of this adaptation is minimal, requiring only modifications to the table creation statement without any additional adjustments to the business code. For example, if your TDSQL Boundless instance contains 30 nodes, creating a first-level hash-partitioned table with 30 partitions will result in TDSQL Boundless creating a primary replica on each node, achieving replica balancing. Concurrently, incremental business data will be evenly distributed across all nodes, ensuring relatively balanced pressure on each node.
To fully leverage the distributed features of TDSQL Boundless and avoid potential performance bottlenecks, creating partitioned tables is a recommended best practice.

Does TDSQL Boundless Have Performance Issues in Read Scenarios? How to Determine the Locations of Data Shards?

In the TDSQL Boundless distributed database, read performance may be affected by data sharding and query patterns. The following are two common scenarios:
Queries with partition keys: If a query includes a partition key, TDSQL Boundless can directly route the query to the specific data shard containing that partition key. This approach is highly efficient as it avoids unnecessary data traversal and pinpoints the correct data node.
Queries without partition keys: When a query does not specify a partition key, TDSQL Boundless needs to determine the data location through secondary indexes. In this scenario, the system performs scan queries on nodes containing the data of the table, which may result in slight performance degradation as it requires examining more data.
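The two routing cases above can be sketched as a toy router. The node names and the modulo routing rule here are assumptions for illustration, not the actual TDSQL Boundless routing logic.

```python
# Illustrative sketch of query routing: a query carrying the partition key is
# pruned to one shard; a query without it fans out to every node holding the
# table. Hypothetical node names and hash rule.

NUM_PARTITIONS = 4
node_of_partition = {0: "node-a", 1: "node-b", 2: "node-c", 3: "node-d"}

def route(partition_key=None):
    """Return the set of nodes a query must touch."""
    if partition_key is not None:
        # Partition pruning: hash the key straight to one shard.
        return {node_of_partition[partition_key % NUM_PARTITIONS]}
    # No partition key: scan every node that holds data for the table.
    return set(node_of_partition.values())

assert route(partition_key=42) == {"node-c"}  # 42 % 4 == 2, single shard
assert len(route()) == 4                      # full fan-out without the key
```

The assertion pair captures why queries carrying the partition key are cheaper: one node is touched instead of all of them.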

What Is the Maximum Supported Capacity of TDSQL Boundless?

The maximum supported capacity of TDSQL Boundless is virtually unlimited. As business demands grow, you can expand the database capacity by adding more nodes to accommodate increasing data storage and processing requirements. Currently, TDSQL Boundless instances containing dozens of nodes have been deployed on public cloud platforms.
TDSQL Boundless also provides a visual interface for convenient horizontal scaling out and scaling in. Additionally, it features built-in automatic data relocation and capacity balancing, which automatically adjusts data distribution among nodes to ensure optimal system performance and storage efficiency without manual intervention.

Is the Query Performance of LSM-Tree-Based TDSQL Boundless Lower Than That of Native MySQL?

Compared with MySQL's B+ tree index, the LSM-tree (Log-Structured Merge-tree) has a significant advantage in write performance, at the possible cost of some read performance.
However, as a distributed database, TDSQL Boundless provides multiple mechanisms to enhance query performance:
1. Horizontal scaling: TDSQL Boundless can expand processing capacity by adding more nodes, thereby increasing queries per second (QPS). This is unachievable with single-server MySQL, whose performance is constrained by the hardware resources of a single machine.
2. Optimization strategies: Even on a single node, TDSQL Boundless employs a series of optimization strategies to enhance read performance:
Leveling Compaction: TDSQL Boundless stores all data (including primary keys and indexes) in a large, ordered key-value space, which corresponds to multiple SST files on physical disks organized into seven levels (L0 to L6). The Leveling Compaction strategy ensures key uniqueness within each level except L0, which accelerates query performance. The L0 level is unique in allowing range overlaps between files, but TDSQL Boundless restricts the number of L0 files, typically to no more than four. When it accesses data, TDSQL Boundless first checks the in-memory memtable. If the data is not found, it sequentially examines SST files on disk level by level. Since keys are unique from L1 to L6, only one SST file per level needs to be checked to determine the presence of target data.
Bloom Filter: When searching for data, TDSQL Boundless uses Bloom filters to quickly filter out SST files that cannot contain the target key, thereby avoiding unnecessary disk lookups and conserving resources.
Block Cache: TDSQL Boundless leverages block caching to store hot data, reducing disk I/O operations and further improving read performance.
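The read path described above (memtable first, then overlapping L0 files, then one candidate SST per level, with a filter skipping files that cannot hold the key) can be sketched as a toy model. The dict-backed "SST" and the hash-set "Bloom filter" are simplifications, not the real on-disk structures.

```python
# Toy LSM read path. Assumptions: SSTs modeled as dicts, Bloom filter modeled
# as a set of hash buckets (no false negatives, like a real filter).

class SST:
    def __init__(self, data):
        self.data = dict(data)
        # Stand-in for a Bloom filter: membership test with no false negatives.
        self.filter = {hash(k) % 64 for k in self.data}

    def may_contain(self, key):
        return hash(key) % 64 in self.filter

    def get(self, key):
        return self.data.get(key)

def lsm_get(key, memtable, l0_files, leveled_files):
    if key in memtable:                       # 1. in-memory memtable first
        return memtable[key]
    for sst in l0_files:                      # 2. L0: ranges may overlap, check all
        if sst.may_contain(key) and (v := sst.get(key)) is not None:
            return v
    for sst in leveled_files:                 # 3. L1..L6: keys unique per level,
        if sst.may_contain(key) and (v := sst.get(key)) is not None:
            return v                          #    so one SST per level suffices
    return None

memtable = {"k1": "v1"}
l0 = [SST({"k2": "v2"})]
levels = [SST({"k3": "v3"}), SST({"k4": "v4"})]
assert lsm_get("k3", memtable, l0, levels) == "v3"
assert lsm_get("missing", memtable, l0, levels) is None
```

The `may_contain` check is where the Bloom filter saves work: files that cannot hold the key are skipped without touching their data at all.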
In summary, although LSM-tree-based TDSQL Boundless may not match optimized MySQL instances in read performance, its distributed architecture and optimization measures still enable it to deliver efficient read and write performance, particularly in scenarios involving large-scale data processing and high-concurrency access.

Do InnoDB and TDSQL Boundless Exhibit the Same Behavior Regarding Primary/Secondary Latency in Large Transactions?

InnoDB and TDSQL Boundless exhibit distinct characteristics in handling primary/standby latency issues caused by large transactions:
InnoDB: InnoDB uses binary logs (binlogs) to synchronize primary/standby data. In high-concurrency and large-data-volume scenarios, large transactions may cause primary/standby latency due to the time-consuming replication and replay process of binlogs.
TDSQL Boundless: As a distributed database based on the Raft protocol, TDSQL Boundless synchronizes data between nodes in real time through Raft logs. In the Raft protocol, after receiving a client request, the Leader node first adds the request as a new log entry to its local log and then replicates it to Follower nodes. The Leader responds to the client only after the log entry is replicated to a majority of nodes and marked as committable. This design effectively reduces potential latency caused by large transactions.
In TDSQL Boundless, there is almost no delay in the apply operations between primary and secondary nodes. The only potential latency occurs when Follower nodes wait for the Leader node to send the next log entry (or heartbeat) to receive the committable index. However, this interval is typically minimal and negligible.
Additionally, TDSQL Boundless imposes limitations on transaction sizes, particularly for deletion operations. It is recommended to split large transactions into multiple smaller ones. This helps prevent individual transactions from consuming excessive resources and reduces their impact on system performance.
In summary, when TDSQL Boundless is compared with InnoDB, the Raft protocol synchronization mechanism of TDSQL Boundless delivers lower primary/standby latency when handling large transactions, particularly in high-concurrency and large-data-volume scenarios.

If a Transaction Is Eventually Rolled Back, What Will Happen to the Logs That Have Been Copied to Follower Nodes?

In the Raft protocol, if a transaction eventually needs to be rolled back, the log entries that have been copied to follower nodes will not be applied to the state machine, meaning that they will not affect the underlying data storage. This is because these log entries are deemed as "uncommitted".
The Raft protocol uses the following mechanism to ensure that uncommitted log entries are not incorrectly applied:
Maintain commitIndex: The Raft protocol uses a variable named commitIndex to track the highest index of committed log entries. Only log entries whose index is less than or equal to commitIndex are considered committed and may be applied to the state machine. If a transaction is rolled back, commitIndex does not advance past its entries, preventing those uncommitted log entries from being applied.
Log truncation: In some cases, for example, during a failover, a newly elected leader node may need to truncate its log to ensure cluster consistency. The new leader node deletes "uncommitted" entries in the log and synchronizes these changes to follower nodes. In this way, log entries related to rollback transactions are removed from the entire cluster.
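The commitIndex rule can be made concrete with a small sketch of the standard Raft majority calculation (a hypothetical toy, not TDSQL Boundless code): an index is committable only once a majority of nodes have stored the entry, and anything above it stays unapplied and may later be truncated.

```python
# Sketch of Raft's commit rule: commit_index is the largest log index that
# a majority of nodes have replicated. Entries above it are uncommitted and
# never applied to the state machine.

def advance_commit_index(match_index, cluster_size):
    """match_index: highest replicated log index on each node (leader included).
    Returns the largest index present on a majority of nodes."""
    majority = cluster_size // 2 + 1
    # Sort descending; the value at position (majority - 1) is held by
    # at least `majority` nodes.
    return sorted(match_index, reverse=True)[majority - 1]

# 5-node cluster: leader and one follower at index 7, others at 5, 4, 2.
commit_index = advance_commit_index([7, 7, 5, 4, 2], 5)
assert commit_index == 5   # indexes 6-7 are on only 2 of 5 nodes: uncommitted
# Entries 6 and 7 stay unapplied; after a leader change they may be truncated,
# which is how a rolled-back transaction's log entries disappear cluster-wide.
```

This is exactly why rolled-back entries never reach the underlying storage: they sit above commit_index, and the apply loop never passes that index.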
Through these mechanisms, the Raft protocol ensures data consistency and integrity of the cluster even in the case of transaction rollback.

What Impact Does Compaction Have on the Performance of TDSQL Boundless?

The Compaction process in TDSQL Boundless primarily involves reading files from an upper level, sort-merging them, and writing the results to files at the next level or levels below. This process mainly consumes CPU and I/O resources, so as long as the system has sufficient resources, Compaction generally has little or no impact on business performance.
Additionally, due to its inherent distributed architecture, TDSQL Boundless leverages resources across all nodes for Compaction. This differs from traditional master-slave architectures where typically only the primary database's resources handle read/write operations (excluding read/write separation scenarios). Consequently, the distributed nature of TDSQL Boundless effectively mitigates the need to reserve additional resources specifically for Compaction.
Concerns about the performance impact of compaction largely stem from the relatively simple compaction policies of early RocksDB implementations. With continuous version iteration, compaction policies have been greatly optimized, making their impact on performance far more controllable.
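The sort-merge at the heart of compaction can be sketched in a few lines. This is an illustrative toy, not RocksDB or TDSQL Boundless code: two sorted runs (upper- and lower-level SST contents) are merged into one sorted run, with the newer upper-level value winning on duplicate keys.

```python
# Toy compaction: merge two sorted key-value runs, newer values shadowing
# older ones. The real process streams SST files; lists stand in for them here.

import heapq

def compact(upper, lower):
    """upper/lower: lists of (key, value) pairs sorted by key; upper is newer."""
    merged = {}
    # heapq.merge is stable: on equal keys, items from `lower` come first,
    # so the later dict write from `upper` wins.
    for key, value in heapq.merge(lower, upper, key=lambda kv: kv[0]):
        merged[key] = value
    return sorted(merged.items())   # one larger sorted run for the next level

upper = [("a", 0), ("c", 1)]
lower = [("a", 9), ("b", 1), ("d", 1)]
assert compact(upper, lower) == [("a", 0), ("b", 1), ("c", 1), ("d", 1)]
```

The work is a linear pass over both runs plus sequential writes, which is why compaction is mostly a CPU and I/O bandwidth question rather than a latency one when resources are sufficient.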

What Disaster Recovery Capabilities Does TDSQL Boundless Provide?

TDSQL Boundless offers diversified high-availability technologies, including disaster recovery via multiple replicas within instances and disaster recovery via physical standby databases between instances.
Disaster recovery via multiple replicas within an instance:
Three replicas in the same IDC: three replicas in one IDC form an instance, which tolerates minority node failures but not IDC-level failures.
Three replicas across three intra-city IDCs: for scenarios with three IDCs in one city, the three IDCs form an instance (each IDC is an AZ), with inter-IDC network latency generally between 0.5 ms and 2 ms. This tolerates minority node failures and single-IDC failures, but not city-level failures.
Disaster recovery via physical standby databases between instances: TDStore currently supports both intra-city and cross-city disaster recovery. It can synchronize data and switch the primary/standby roles between two entirely independent instances. When the primary instance becomes unavailable due to a planned or unplanned event, the standby instance can take over the service.
Note:
1. Applications that handle key business have high requirements for business continuity. For this reason, they should be at the same disaster recovery level as the database. Otherwise, business will be affected in case of a failure even if the database can quickly recover.
2. If you choose disaster recovery via physical standby databases between instances, note that data synchronization between the primary and standby instances is near real time. Two switching modes are available: switchover (a planned switch) and failover (a switch triggered when a failure is detected). Switchover avoids data loss, while failover may cause loss (a forced failover typically loses under 5 seconds of data, depending on the actual replication lag of the standby instance).

Does TDSQL Boundless Support JSON?

JSON is supported, including the two aggregate functions JSON_ARRAYAGG and JSON_OBJECTAGG, with behavior currently consistent with MySQL.

Does TDSQL Boundless Support Foreign Keys and Global Indexes?

TDStore does not support foreign keys or global indexes. If you need to check whether the instances you plan to migrate use these features, contact technical support to obtain a scanning tool for assessment.

Does TDSQL Boundless Support Access over the Public Network?

For security and performance considerations, TDSQL Boundless instances currently only support access from within the VPC private network.

Why Do Auto-Increment Fields in TDSQL Boundless Skip Values?

In TDSQL Boundless databases, auto-increment fields currently only guarantee global uniqueness, not global auto-incrementing.
To improve the allocation efficiency of auto-increment field values, TDSQL Boundless adopts a sharded caching mechanism. For example, if there are three compute nodes, they might cache a segment of consecutive auto-increment values respectively:
Node A caches the auto-increment range 1-100;
Node B caches the auto-increment range 101-200;
Node C caches the auto-increment range 201-300.
When the cached value of node A is exhausted, it will obtain the next available auto-increment range, such as 301-400. This mechanism ensures that auto-increment values can be quickly assigned even in a distributed environment, but it may also cause auto-increment value jumps because the cached values between different nodes are discontinuous.
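The range-caching scheme above can be sketched as follows. This is a hypothetical implementation for illustration (class names and block size are invented), showing why values end up globally unique but not globally monotonic.

```python
# Sketch of sharded auto-increment caching: each compute node grabs a disjoint
# block of IDs from a shared allocator and serves from it locally.

class RangeAllocator:
    """Hands out disjoint blocks of auto-increment values."""
    def __init__(self, block_size=100):
        self.block_size = block_size
        self.next_start = 1

    def grab_block(self):
        start = self.next_start
        self.next_start += self.block_size
        return iter(range(start, start + self.block_size))

class ComputeNode:
    def __init__(self, allocator):
        self.allocator = allocator
        self.block = allocator.grab_block()

    def next_id(self):
        try:
            return next(self.block)
        except StopIteration:            # cached range exhausted:
            self.block = self.allocator.grab_block()  # fetch the next one
            return next(self.block)

alloc = RangeAllocator(block_size=100)
a, b, c = ComputeNode(alloc), ComputeNode(alloc), ComputeNode(alloc)
# Node A holds 1-100, B holds 101-200, C holds 201-300.
ids = [a.next_id(), b.next_id(), c.next_id(), a.next_id()]
assert ids == [1, 101, 201, 2]   # globally unique, but values "jump"
```

Interleaved inserts across nodes produce the sequence 1, 101, 201, 2: no duplicates, but visible gaps, which is exactly the jump phenomenon described above.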

How Do I Connect to a TDSQL Boundless Database Using Lighthouse?

Lighthouse uses the VPC automatically assigned by Tencent Cloud for network isolation. By default, the private network does not interconnect with other Tencent Cloud resources in the VPC, such as cloud databases. Interconnection requires association with CCN. See Lighthouse Private Network Connectivity Description and Private Network Interconnection.

What Should I Do If the System Prompts "Instance Version Verification Error. Upgrade the Kernel to the Latest Version and Try Again"?

TDSQL Boundless kernels are continuously iterating and upgrading. If you encounter the above system prompt, submit a ticket to contact Tencent Cloud engineers to have the engine kernel upgraded.
