TencentDB for PostgreSQL

Incremental Migration Check

Last updated: 2024-11-04 10:50:08

Check Details

If you select incremental migration as the migration type, the following conditions must be met; otherwise, verification fails.
The wal_level parameter of the source database must be set to logical.
The max_replication_slots and max_wal_senders parameters of the source database must be greater than the total number of databases to be migrated (reserve some extra capacity for other connections).
The persistence attribute (relpersistence) of the tables to be migrated in the source database must be p (permanent table). Unlogged and temporary tables are not written to WAL, so they cannot be replicated and therefore cannot be migrated logically.
Migrating tables with primary keys is recommended; tables without primary keys may produce inconsistent data after migration.
If a table to be migrated has no primary key and no replica identity (that is, its REPLICA IDENTITY attribute is set to NOTHING), the verification task reports a warning.
If a table to be migrated has no primary key and contains columns of types that do not support the = operator (json, point, polygon, txid_snapshot, xml), verification fails. Modify the table without a primary key as prompted, or deselect it from the migration; otherwise, the task cannot proceed.
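The table-level conditions above can be pre-checked with catalog queries before running the verification task. The sketch below assumes your user tables live outside the system schemas; adjust the schema filter as needed.

```sql
-- List tables that are not permanent (relpersistence <> 'p'),
-- i.e. unlogged ('u') or temporary ('t') tables that block logical migration.
SELECT n.nspname, c.relname, c.relpersistence
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND c.relpersistence <> 'p'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- List tables without a primary key, which may migrate inconsistently.
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid AND con.contype = 'p'
  );
```

Run these in each database selected for migration; both queries should return zero rows before you retry verification.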

Fixing Method

Modifying the wal_level/max_replication_slots/max_wal_senders Parameters

Modify the wal_level, max_replication_slots, and max_wal_senders parameters as follows.
1. Log in to the source database.
Note:
If the source database is self-built, log in to the server where the database runs and go to the database's data directory (usually $PGDATA).
If the source database is in another cloud, modify the parameters as requested by the corresponding cloud vendor.
If you need to modify the parameters of the target instance, submit a ticket through Online Support for assistance.
2. Open the postgresql.conf file in the data directory and modify the corresponding parameters.
wal_level = logical
max_replication_slots = 10   # adjust to actual needs
max_wal_senders = 10         # adjust to actual needs
3. After the modification is complete, restart the database instance (these parameters only take effect at server start).
4. Log in to the database instance and run the following command to check whether the parameters are correctly set:
postgres=> select name,setting from pg_settings where name='wal_level';
name | setting
-----------+---------
wal_level | logical
(1 row)
postgres=> select name,setting from pg_settings where name='max_replication_slots';
name | setting
-----------------------+---------
max_replication_slots | 10
(1 row)
postgres=> select name,setting from pg_settings where name='max_wal_senders';
name | setting
-----------------+---------
max_wal_senders | 10
(1 row)
5. Perform the verification task again.
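As an alternative to editing postgresql.conf by hand, PostgreSQL 9.4 and later can persist the same settings with ALTER SYSTEM, which writes them to postgresql.auto.conf. A restart is still required, because all three parameters only take effect at server start. This is a sketch; the slot and sender counts are example values.

```sql
-- Requires superuser or equivalent privilege; written to postgresql.auto.conf.
ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET max_replication_slots = 10;  -- adjust to actual needs
ALTER SYSTEM SET max_wal_senders = 10;        -- adjust to actual needs
```

After restarting, verify the values with the pg_settings queries shown in step 4.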

Modifying the REPLICA IDENTITY Attribute of the Table to Be Migrated

Migrating tables without primary keys is generally not recommended, as it may lead to data inconsistencies. If a table to be migrated has no primary key and no replica identity (that is, its REPLICA IDENTITY attribute is set to NOTHING), the verification task reports a warning.
If a warning is reported, modify the table's attribute as follows (replace schemaName.tableName with the actual schema and table name).
ALTER TABLE schemaName.tableName REPLICA IDENTITY FULL;
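To find the tables that would trigger this warning, you can query the catalogs. The sketch below lists user tables whose replica identity resolves to nothing: either REPLICA IDENTITY is explicitly set to NOTHING ('n'), or it is left at DEFAULT ('d') on a table with no primary key. Note that REPLICA IDENTITY FULL logs the entire old row for every UPDATE and DELETE, which increases WAL volume.

```sql
SELECT n.nspname, c.relname, c.relreplident
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND (c.relreplident = 'n'
       OR (c.relreplident = 'd'
           AND NOT EXISTS (
             SELECT 1 FROM pg_constraint con
             WHERE con.conrelid = c.oid AND con.contype = 'p'
           )));
```

Apply the ALTER TABLE statement above to each table this query returns, then rerun the verification task.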
