Tencent Cloud TDSQL Boundless

Creating Indexes

Last updated: 2026-03-06 18:48:24

Overview

An index is an ordered structure built over table data; TDSQL Boundless uses indexes to locate rows quickly. This section provides the syntax and examples for creating indexes.
By default, creating a secondary index in TDSQL Boundless is an online operation and does not block reads or writes on the table. For details on Online DDL, see Online DDL Notes.

Create an Index on an Existing Table

# Method 1: Using the CREATE INDEX statement
CREATE [UNIQUE] INDEX index_name
    ON tbl_name (column_names)
    [index_option]
    [algorithm_option];

# Method 2: Using the ALTER TABLE statement
ALTER TABLE tbl_name ADD
    {[UNIQUE] {INDEX | KEY} | PRIMARY KEY}
    index_name (column_names)
    [index_option]
    [algorithm_option];

index_option: {
    COMMENT 'string'
  | {VISIBLE | INVISIBLE}
}

algorithm_option:
    ALGORITHM [=] {DEFAULT | INPLACE | COPY}
Parameter Description
Index option (index_option)
COMMENT 'string': adds a comment to the index.
VISIBLE | INVISIBLE: sets whether the index is visible to the optimizer.
Algorithm option (algorithm_option)
ALGORITHM [=] {DEFAULT | INPLACE | COPY}: specifies the algorithm used to create the index.
DEFAULT: the system automatically selects the optimal algorithm.
INPLACE: builds the index online without blocking reads or writes (recommended).
COPY: builds the index by copying table data; in TDSQL Boundless this is also online by default and does not block reads or writes.
Use Case
# Create a test table
CREATE TABLE sbtest1 (id int, v1 int, v2 int, v3 int, v4 int);

# As of kernel version 21.2.3, adding a primary key online is not yet supported.
ALTER TABLE sbtest1 ADD PRIMARY KEY(id), ALGORITHM = COPY;
ERROR 8528 (HY000): Online alter table tdsql.sbtest1 failed with 'Not support table without primary key', please set variable 'tdsql_use_online_copy_ddl' to 'false' if no write during alter is acceptable.

# Specify the COMMENT, visibility, and algorithm options explicitly
CREATE UNIQUE INDEX idx_v1 ON sbtest1 (v1) COMMENT 'v1_index' INVISIBLE ALGORITHM = INPLACE;
ALTER TABLE sbtest1 ADD INDEX idx_v2 (v2) COMMENT 'v2_index' VISIBLE, ALGORITHM = INPLACE;

# The INPLACE algorithm is used by default
CREATE UNIQUE INDEX idx_v4 ON sbtest1 (v4);
ALTER TABLE sbtest1 ADD INDEX idx_v3 (v3);
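After creating indexes, you can verify their comment, visibility, and uniqueness settings from index metadata. A minimal sketch in MySQL compatibility mode, using the standard MySQL `SHOW INDEX` output and `information_schema.STATISTICS` view (confirm the available columns against your kernel version):

```sql
-- List all indexes on the table, including visibility and comments
SHOW INDEX FROM sbtest1;

-- Or query information_schema directly (MySQL-compatible view)
SELECT INDEX_NAME, COLUMN_NAME, NON_UNIQUE, IS_VISIBLE, INDEX_COMMENT
FROM information_schema.STATISTICS
WHERE TABLE_NAME = 'sbtest1';
```

An index created with INVISIBLE should show `IS_VISIBLE = 'NO'` and will be ignored by the optimizer until it is switched to VISIBLE.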

Creating an Index While Creating a New Table

For details, see Creating a Table.
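As a quick illustration (a minimal sketch using standard MySQL-compatible syntax; the table and index names are hypothetical, and Creating a Table remains the authoritative reference), indexes can also be declared inline in the table definition:

```sql
CREATE TABLE sbtest2 (
  id int PRIMARY KEY,
  v1 int,
  v2 int,
  UNIQUE KEY idx_v1 (v1) COMMENT 'v1_index',
  KEY idx_v2 (v2) INVISIBLE  -- hidden from the optimizer until altered to VISIBLE
);
```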

Recommendations for Creating Indexes on Large-Volume Tables

The Fast Online DDL capability of TDSQL Boundless combines parallel processing with bypass writes to make DDL operations more efficient and convenient.
However, if large and small tables are not distinguished correctly, or partitioning is not applied appropriately for the data scale, the execution efficiency of Fast Online DDL can be significantly compromised: when a large table lacks proper partitioning, its data tends to concentrate on a single node, forcing the DDL operation to execute serially on that node instead of in parallel across multiple nodes.
Only by using partitioned tables appropriately for the data scale can the distributed scalability of Fast Online DDL be fully leveraged.
Partitioning Recommendations:
1. TDSQL Boundless is 100% compatible with native MySQL partitioned table syntax and supports both first-level and second-level partitioning. Partitioning primarily addresses: (1) capacity issues of large tables; (2) performance issues under high-concurrency access.
2. Large table capacity issues: If a single table is expected to exceed the data disk capacity of a single node, it is recommended to create first-level HASH or KEY partitioning to distribute data evenly across multiple nodes. If the data volume continues to grow, elastic scaling can be used to progressively reduce per-node disk usage.
3. Performance issues under high-concurrency access: For TP services experiencing high-concurrency access, if a single node's performance is expected to be insufficient to handle excessive read/write pressure, it is also recommended to create first-level hash or key partitioning to evenly distribute the read/write load across multiple nodes.
4. For partitioned tables created in Point 2 and Point 3, it is recommended to select fields that satisfy most core business queries as the partition key based on business characteristics, and the number of partitions should be a multiple of the number of instance nodes.
5. If there is a need for data cleanup, you can create a RANGE partitioned table and use the truncate partition command for quick data cleanup. To also distribute data while achieving cleanup, you can further create a partitioned table with secondary HASH partitioning.
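The recommendations above can be sketched with standard MySQL-compatible partitioning syntax. The table names, partition counts, and date boundaries below are illustrative assumptions; choose a partition key that serves your core queries, and make the partition count a multiple of the node count as noted in Point 4:

```sql
-- Point 2/3: first-level HASH partitioning to spread data and
-- read/write load evenly across nodes (16 assumes a multiple of the node count)
CREATE TABLE orders (
  order_id bigint PRIMARY KEY,
  user_id  bigint,
  amount   decimal(10,2)
) PARTITION BY HASH(order_id) PARTITIONS 16;

-- Point 5: RANGE partitioning by time with second-level HASH
-- subpartitions, so old data can be dropped quickly while the
-- remaining data still spreads across nodes
CREATE TABLE events (
  id         bigint,
  user_id    bigint,
  created_at datetime,
  PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (TO_DAYS(created_at))
  SUBPARTITION BY HASH(id) SUBPARTITIONS 4 (
    PARTITION p202601 VALUES LESS THAN (TO_DAYS('2026-02-01')),
    PARTITION p202602 VALUES LESS THAN (TO_DAYS('2026-03-01'))
);

-- Fast cleanup of a whole time range without row-by-row deletes
ALTER TABLE events TRUNCATE PARTITION p202601;
```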
