The solution for importing data from one distributed database instance to another is special. mysqldump is used as an example below to summarize the import steps:
1. Purchase a Linux CVM instance and run yum install mariadb-server to install mysqldump.
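For example, on a CentOS-style CVM image (an assumption; any image where yum is available works the same way), you can confirm that the client tools were installed before exporting anything:
yum install -y mariadb-server
# Confirm that the export and import tools are on the PATH
mysqldump --version
mysql --version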
2. Export the table schema by using mysqldump:
mysqldump --compact --single-transaction -d -uxxx -pxxx -hxxx.xxx.xxx.xxx -Pxxxx db_name table_name > schema.sql
Note: Select the db_name and table_name parameters as needed.
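As a hypothetical illustration, the command below dumps only the table structure of a single table; the host 10.0.0.10, port 3306, account user1/MyPassw0rd, database shop_db, and table orders are placeholders rather than values from this document:
# -d (--no-data) writes only the CREATE TABLE statements to schema.sql
mysqldump --compact --single-transaction -d -uuser1 -pMyPassw0rd -h10.0.0.10 -P3306 shop_db orders > schema.sql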
3. Export data by using mysqldump:
Set the net_write_timeout parameter in the parameter settings in the TDSQL for MySQL console: set global net_write_timeout=28800
mysqldump --compact --single-transaction --no-create-info -c -uxxx -pxxx -hxxx.xxx.xxx.xxx -Pxxxx db_name table_name > data.sql
Note: The db_name and table_name parameters should be selected as needed. If the exported data is to be imported into another TDSQL for MySQL environment, the -c option must be added, and there should be a space between -c and db_name.
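Using the same placeholder host, account, database, and table as above, a data-only export intended for another TDSQL for MySQL environment might look like this; note the -c option and the space between it and the database name:
# --no-create-info skips CREATE TABLE statements; -c (--complete-insert) writes
# INSERT statements that list every column name explicitly
mysqldump --compact --single-transaction --no-create-info -c -uuser1 -pMyPassw0rd -h10.0.0.10 -P3306 shop_db orders > data.sql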
4. Create the database:
mysql --default-character-set=utf8 -uxxx -pxxx -hxxx.xxx.xxx.xxx -Pxxxx -e "create database dbname;"
Note: Enter the username directly after -u with no space (-u is a keyword) and the password directly after -p with no space (-p is a keyword). The same applies to the import commands below.
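For instance, assuming a hypothetical target instance at 10.0.0.20 and the placeholder account from the earlier examples, the database could be created as follows; the username and password follow -u and -p with no space:
# -e executes the statement and exits; shop_db is a placeholder database name
mysql --default-character-set=utf8 -uuser1 -pMyPassw0rd -h10.0.0.20 -P3306 -e "create database shop_db;"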
5. Import the table schema:
mysql --default-character-set=utf8 -uxxx -pxxx -hxxx.xxx.xxx.xxx -Pxxxx dbname < schema.sql
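Continuing the hypothetical example, the exported structure is replayed into the newly created database on the target instance:
# Redirect schema.sql into the target database to recreate the table structure
mysql --default-character-set=utf8 -uuser1 -pMyPassw0rd -h10.0.0.20 -P3306 shop_db < schema.sql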
6. Import the data:
mysql --default-character-set=utf8 -uxxx -pxxx -hxxx.xxx.xxx.xxx -Pxxxx dbname < data.sql
Note: If an auto-increment field is used in the source table and the "Column 'xx' specified twice" error occurs during import, schema.sql needs to be processed: remove the back quotes from the auto-increment field (cat schema.sql | tr "\`" " " > schema_tr.sql), drop the database, and repeat steps 4–6 with the processed schema_tr.sql.
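Put together, the recovery path for the auto-increment case might look like the sketch below, reusing the placeholder host, account, and database names from the examples above:
# Strip the back quotes so the auto-increment column is not declared twice on import
cat schema.sql | tr "\`" " " > schema_tr.sql
# Drop and recreate the database, then repeat the schema and data imports (steps 4-6)
mysql --default-character-set=utf8 -uuser1 -pMyPassw0rd -h10.0.0.20 -P3306 -e "drop database shop_db; create database shop_db;"
mysql --default-character-set=utf8 -uuser1 -pMyPassw0rd -h10.0.0.20 -P3306 shop_db < schema_tr.sql
mysql --default-character-set=utf8 -uuser1 -pMyPassw0rd -h10.0.0.20 -P3306 shop_db < data.sql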