
Data Insertion

Last updated: 2026-03-30 10:13:46
This article describes how to use SQL statements to insert data in TDSQL Boundless databases, covering common operations such as single-row insertion, multi-row insertion, and specified-column insertion, along with performance optimization tips and considerations for data insertion.

Prerequisites

Before performing data insertion operations, ensure that the following conditions have been met:
The TDSQL Boundless instance has been created and the connection has been established. See Connecting to the Database.
The target database and table have been created. See Table Creation.
The current user has the INSERT privilege on the target table.

Methods of Data Insertion

TDSQL Boundless is compatible with the MySQL protocol and supports standard SQL INSERT statements for data insertion. The following are several commonly used insertion methods.

Use INSERT INTO...VALUES to insert single-row data

INSERT INTO...VALUES is the most basic data insertion method, suitable for inserting a single row of data into a table.
Syntax Format
INSERT INTO table_name (column1, column2, ..., columnN)
VALUES (value1, value2, ..., valueN);
Parameter Description
table_name: Name of the target table.
column1, column2, ..., columnN: List of target column names, separated by commas.
value1, value2, ..., valueN: List of values corresponding one-to-one with the column names.
Example
The following example creates an employee information table and inserts a record into it.
-- Create employee information table
CREATE TABLE employees (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) NOT NULL,
department VARCHAR(50),
salary DECIMAL(10, 2),
hire_date DATE
);

-- Insert an employee record
INSERT INTO employees (name, department, salary, hire_date)
VALUES ('Zhang San', 'Technology Department', 15000.00, '2026-01-15');
After successful execution, the returned result is as follows:
Query OK, 1 row affected (0.01 sec)

Use INSERT INTO...VALUES to insert multiple rows of data

If you need to insert multiple rows of data simultaneously, you can specify multiple sets of VALUES values in a single INSERT statement. Multi-row insertion is typically more efficient than executing multiple single-row insert statements row by row, as it reduces the number of network interactions between the client and server.
Syntax Format
INSERT INTO table_name (column1, column2, ..., columnN)
VALUES
(value1_1, value1_2, ..., value1_N),
(value2_1, value2_2, ..., value2_N),
...
(valueM_1, valueM_2, ..., valueM_N);
Example
INSERT INTO employees (name, department, salary, hire_date)
VALUES
('Li Si', 'Product Department', 13000.00, '2026-02-01'),
('Wang Wu', 'Technology Department', 16000.00, '2026-02-15'),
('Zhao Liu', 'Marketing Department', 12000.00, '2026-03-01');
After successful execution, the returned result is as follows:
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0
Note:
The size of a single multi-row INSERT statement is limited by the max_allowed_packet parameter. If a large volume of data needs to be inserted, it is recommended to split the operation into multiple batches.

Insert data without specifying column names

If the values provided in the VALUES clause exactly match the order and number of columns in the table, the column name list can be omitted.
Syntax Format
INSERT INTO table_name VALUES (value1, value2, ..., valueN);
Example
INSERT INTO employees VALUES (NULL, 'Sun Qi', 'Operations Department', 11000.00, '2026-03-10');
Note:
When column names are omitted, the values in VALUES must be provided in full accordance with the order and number of columns in the table structure. For AUTO_INCREMENT columns, NULL or 0 can be used to allow the system to auto-generate values. This approach is not recommended in production environments, as table structure changes may cause the statement to fail.

Using INSERT INTO...SET to insert data

INSERT INTO...SET syntax assigns values column by column in the form of "column_name = value", suitable for scenarios where only specific columns need values. Unspecified columns will use default values or NULL.
Syntax Format
INSERT INTO table_name SET column1 = value1, column2 = value2, ...;
Example
INSERT INTO employees SET name = 'Zhou Ba', department = 'Technology Department', salary = 14000.00, hire_date = '2026-03-15';

The INSERT INTO...SELECT statement is used to insert data from another table

INSERT INTO...SELECT statement is used to insert query results from one table into another, suitable for inter-table data migration or data aggregation.
Syntax Format
INSERT INTO target_table (column1, column2, ..., columnN)
SELECT column1, column2, ..., columnN
FROM source_table
WHERE condition;
Example
The following example inserts Technology Department employee data into an archive table.
-- Create an archive table
CREATE TABLE employees_archive LIKE employees;

-- Insert Technology Department employee data into the archive table
INSERT INTO employees_archive (name, department, salary, hire_date)
SELECT name, department, salary, hire_date
FROM employees
WHERE department = 'Technology Department';
Note:
INSERT INTO...SELECT may impose long read locks on the source table in scenarios with large data volumes. It is recommended to execute during off-peak business hours.
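One common way to limit lock duration is to copy the data in primary-key ranges so that each statement stays short. This is a sketch that assumes id is a monotonically increasing primary key on the employees table:

```sql
-- Copy rows in primary-key ranges to keep each statement short
INSERT INTO employees_archive (name, department, salary, hire_date)
SELECT name, department, salary, hire_date
FROM employees
WHERE department = 'Technology Department'
  AND id > 0 AND id <= 10000;
-- Repeat with the next id range (10000 < id <= 20000, and so on)
-- until the maximum id is reached
```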

Handling primary key or unique key conflicts during insertion

When the target table has PRIMARY KEY or UNIQUE KEY constraints, inserting duplicate data will result in errors. TDSQL Boundless provides the following methods to avoid such errors.

Use INSERT IGNORE to ignore conflicts

INSERT IGNORE ignores the conflicting row when a primary key or unique key conflict is encountered, does not insert that row of data, and continues with subsequent insert operations.
Syntax Format
INSERT IGNORE INTO table_name (column1, column2, ...)
VALUES (value1, value2, ...);
Example
-- Assume the record with id=1 already exists
INSERT IGNORE INTO employees (id, name, department, salary, hire_date)
VALUES (1, 'new employee', 'Technology Department', 20000.00, '2026-03-20');
After successful execution, the returned result is as follows:
Query OK, 0 rows affected, 1 warning (0.00 sec)
Note:
INSERT IGNORE does not return an error but generates a warning. SHOW WARNINGS can be used to view specific conflict information.
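For instance, if the insert above is skipped because the record with id=1 already exists, SHOW WARNINGS reveals the cause. The exact message wording may vary by kernel version, but in MySQL-compatible databases a duplicate-key conflict is typically reported as code 1062:

```sql
SHOW WARNINGS;
-- Typical output (wording may vary by kernel version):
-- Level: Warning | Code: 1062 | Message: Duplicate entry '1' for key 'PRIMARY'
```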

Use INSERT...ON DUPLICATE KEY UPDATE to update on conflict

INSERT...ON DUPLICATE KEY UPDATE does not ignore the row when a primary key or unique key conflict is encountered. Instead, it performs the update operation specified in the UPDATE clause. This statement is suitable for "insert or update" scenarios.
Syntax Format
INSERT INTO table_name (column1, column2, ..., columnN)
VALUES (value1, value2, ..., valueN)
ON DUPLICATE KEY UPDATE column1 = new_value1, column2 = new_value2, ...;
Example
INSERT INTO employees (id, name, department, salary, hire_date)
VALUES (1, 'Zhang San', 'Technology Department', 18000.00, '2026-01-15')
ON DUPLICATE KEY UPDATE salary = 18000.00;
If the record with id=1 exists, after this statement is executed, the salary field will be updated to 18000.00.
Note:
When multiple unique keys exist in the table, INSERT...ON DUPLICATE KEY UPDATE may match an unexpected unique key, resulting in updates to unintended rows. It is recommended to use this syntax only on tables with a single unique key.
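To avoid repeating literals in the UPDATE clause, MySQL also allows the UPDATE clause to reference the value that would have been inserted via the VALUES() function. Assuming TDSQL Boundless inherits this MySQL behavior, the earlier example can be written as:

```sql
INSERT INTO employees (id, name, department, salary, hire_date)
VALUES (1, 'Zhang San', 'Technology Department', 18000.00, '2026-01-15')
ON DUPLICATE KEY UPDATE salary = VALUES(salary);
```

This keeps the inserted value and the conflict-update value in sync when the literal changes.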

Use REPLACE INTO to replace data

REPLACE INTO deletes the old row and inserts a new one when a primary key or unique key conflict is encountered.
Syntax Format
REPLACE INTO table_name (column1, column2, ..., columnN)
VALUES (value1, value2, ..., valueN);
Example
REPLACE INTO employees (id, name, department, salary, hire_date)
VALUES (1, 'Zhang San', 'Technology Department', 20000.00, '2026-01-15');
Warning:
REPLACE INTO first deletes the old row and then inserts a new one, which is equivalent to performing a DELETE followed by an INSERT. Before using this statement, confirm that your business logic allows the old row to be deleted.

Batch import of data

When large volumes of data are imported, executing INSERT statements row by row is inefficient. TDSQL Boundless supports efficient batch data import through the following methods.

Use LOAD DATA INFILE to import

LOAD DATA INFILE reads data from a text file and imports it into the target table in batches, which is suitable for large-volume import scenarios. Compared with row-by-row INSERT, it can be tens of times faster.
Syntax Format
LOAD DATA INFILE 'file_path'
INTO TABLE table_name
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
Parameter Description
file_path: Path of the data file.
FIELDS TERMINATED BY: Field delimiter; for example, ',' indicates comma-separated fields.
ENCLOSED BY: Field value enclosure character; for example, '"' indicates double quotes.
LINES TERMINATED BY: Line terminator; for example, '\n' indicates a newline character.
IGNORE 1 ROWS: Skips the first row of the file (typically the header row).
Example
LOAD DATA INFILE '/tmp/employees.csv'
INTO TABLE employees
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(name, department, salary, hire_date);
Note:
When LOAD DATA INFILE is used, ensure that the current user has the FILE privilege and that the data file path is accessible by the database server.
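When the data file lives on the client machine rather than the database server, MySQL-compatible clients generally support the LOCAL keyword, subject to the local_infile setting being enabled on both client and server. Assuming TDSQL Boundless follows this MySQL behavior, the client-side variant looks like:

```sql
LOAD DATA LOCAL INFILE '/path/on/client/employees.csv'
INTO TABLE employees
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(name, department, salary, hire_date);
```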

Use multi-row INSERT to perform batch insertion

When it is inconvenient to use LOAD DATA INFILE, batch insertion can be achieved by including multiple VALUES clauses within a single INSERT statement.
Example
INSERT INTO employees (name, department, salary, hire_date) VALUES
('Employee A', 'Technology Department', 15000.00, '2026-01-01'),
('Employee B', 'Product Department', 13000.00, '2026-01-02'),
('Employee C', 'Marketing Department', 12000.00, '2026-01-03'),
...
('Employee N', 'Operations Department', 11000.00, '2026-01-30');
Note:
It is recommended that each INSERT statement contain 100 to 1,000 rows to avoid excessively large individual statements. You can adjust the maximum allowed size of a single statement by modifying the max_allowed_packet parameter.
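The batching described above is usually done on the client side. The following is a minimal Python sketch (not a TDSQL-specific API) that splits rows into fixed-size chunks and builds a parameterized multi-row INSERT for each chunk; the table and column names are taken from the employees example, and the cursor call is left commented out because it requires a live connection:

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive chunks of at most `size` items."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def build_insert(table, columns, rows):
    """Build a multi-row INSERT with one %s placeholder per value,
    plus the flattened parameter list for a DB-API cursor.execute()."""
    row_tpl = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) VALUES "
           + ", ".join([row_tpl] * len(rows)))
    params = [value for row in rows for value in row]
    return sql, params

# Example: split 2,500 rows into batches of 1,000
rows = [(f"Employee {i}", "Technology Department", 15000.00, "2026-01-01")
        for i in range(2500)]
for chunk in batched(rows, 1000):
    sql, params = build_insert(
        "employees", ["name", "department", "salary", "hire_date"], chunk)
    # cursor.execute(sql, params)  # run each batch on a real connection
```

Placeholders keep string values safely escaped by the driver, which matters when names or departments may contain quotes.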

Auto-Increment Columns and Insertion

TDSQL Boundless supports the AUTO_INCREMENT auto-increment column. When data is inserted, auto-increment columns can be handled in the following ways.

Automatically generated auto-increment values

When data is inserted, omit the auto-increment column or specify a NULL value for it, and the system will automatically assign an incrementing unique value.
-- Omit the auto-increment column
INSERT INTO employees (name, department, salary, hire_date)
VALUES ('Test Employee', 'Technology Department', 10000.00, '2026-03-19');

-- Specify NULL for the auto-increment column
INSERT INTO employees (id, name, department, salary, hire_date)
VALUES (NULL, 'Test Employee 2', 'Technology Department', 10000.00, '2026-03-19');

Explicitly specify the auto-increment value

Users can also explicitly assign a value to the auto-increment column. If the specified value does not conflict with existing records, the specified value will be used.
INSERT INTO employees (id, name, department, salary, hire_date)
VALUES (1000, 'Specified ID Employee', 'Technology Department', 10000.00, '2026-03-19');
Note:
After explicitly assigning an auto-increment value, the system's auto-increment counter will automatically adjust to the current maximum value plus one. Subsequent auto-assigned values will continue incrementing from that value.
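To retrieve the auto-increment value generated for the most recent insert on the current connection, the standard MySQL approach is the LAST_INSERT_ID() function. Assuming TDSQL Boundless inherits this behavior:

```sql
INSERT INTO employees (name, department, salary, hire_date)
VALUES ('Test Employee 3', 'Technology Department', 10000.00, '2026-03-19');

-- Returns the id assigned to the row just inserted on this connection
SELECT LAST_INSERT_ID();
```

LAST_INSERT_ID() is scoped per connection, so concurrent inserts from other sessions do not affect the value it returns.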

Performance Optimization Suggestions

When performing data insertion operations, you can refer to the following recommendations to improve write performance.

Use multi-row INSERT instead of single-row INSERT

Multi-row INSERT can reduce the number of SQL statement parses and network round trips, offering significant performance advantages in batch insertion scenarios.
Recommended Practice
INSERT INTO employees (name, department, salary, hire_date) VALUES
('Employee 1', 'Technology Department', 15000.00, '2026-01-01'),
('Employee 2', 'Product Department', 13000.00, '2026-01-02'),
('Employee 3', 'Marketing Department', 12000.00, '2026-01-03');
Not Recommended Practice
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 1', 'Technology Department', 15000.00, '2026-01-01');
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 2', 'Product Department', 13000.00, '2026-01-02');
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 3', 'Marketing Department', 12000.00, '2026-01-03');

Properly use transactions

When large amounts of data are inserted in batches, it is recommended to place multiple INSERT statements within a single transaction for unified commit, to avoid the additional overhead caused by committing each statement individually.
START TRANSACTION;
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 1', 'Technology Department', 15000.00, '2026-01-01');
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 2', 'Product Department', 13000.00, '2026-01-02');
-- More INSERT statements
COMMIT;
Note:
The data volume in a single transaction should not be too large. If a large amount of data needs to be inserted, it is recommended to commit in batches of 1,000 to 10,000 rows each. An excessively large transaction may cause lock wait timeouts or excessive memory usage.

Temporarily disable autocommit

When importing data in batches, you can temporarily disable the autocommit mode to reduce the commit overhead for each statement.
SET autocommit = 0;

INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 1', 'Technology Department', 15000.00, '2026-01-01');
INSERT INTO employees (name, department, salary, hire_date) VALUES ('Employee 2', 'Product Department', 13000.00, '2026-01-02');
-- More INSERT statements

COMMIT;
SET autocommit = 1;

Use LOAD DATA INFILE to import large amounts of data

When importing millions of rows or more, it is recommended to use LOAD DATA INFILE instead of INSERT statements. LOAD DATA INFILE writes in batches by reading text files directly, and its performance is significantly higher than row-by-row INSERT.

Limitations

When using the INSERT statement, please note the following:
INSERT statements are automatically committed in the default autocommit = 1 mode. To enable rollback, use START TRANSACTION or BEGIN to start a transaction before executing the INSERT statement.
When data is inserted, the data type must match the column definition. For example, inserting string values into an INT column will result in implicit type conversion or an error.
String values to be inserted must be enclosed in single quotes; numeric values do not require quotation marks.
TDSQL Boundless disables triggers by default. When triggers exist on a table, INSERT operations will not trigger the corresponding BEFORE INSERT and AFTER INSERT triggers.
When data is inserted, if it violates NOT NULL constraints, PRIMARY KEY constraints, UNIQUE KEY constraints, or FOREIGN KEY constraints, the system will return an error and abort the execution of the current statement.
When INSERT IGNORE is used, rows that violate constraints will be skipped without aborting the execution of the entire statement.
