Dynamic Release Record (2026)
| Type | Version |
| --- | --- |
| Oracle | 11, 12, 19 |
```sql
sqlplus sys/password@host:port/SID AS SYSDBA

CREATE USER flinkuser IDENTIFIED BY flinkpw
  DEFAULT TABLESPACE LOGMINER_TBS
  QUOTA UNLIMITED ON LOGMINER_TBS;

-- Allow the user to connect to the database
GRANT CREATE SESSION TO flinkuser;

-- Grant SELECT on all tables in the database so data can be read
GRANT SELECT ANY TABLE TO flinkuser;

-- SELECT_CATALOG_ROLE is a predefined role that grants read access to the
-- database system tables. During the incremental phase, the LogMiner session
-- relies on this permission to query table structures.
GRANT SELECT_CATALOG_ROLE TO flinkuser;

-- ANALYZE ANY lets the user analyze tables, indexes, partitions, and table
-- functions owned by any user. When a table lacks a primary key, this
-- permission is used to find unique indexes to serve as primary key substitutes.
GRANT ANALYZE ANY TO flinkuser;
```
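To confirm the grants above took effect, you can query the standard data-dictionary views as a privileged user (a hedged sketch; grantee names are stored in uppercase):

```sql
-- List system privileges and roles granted to flinkuser
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'FLINKUSER';
SELECT GRANTED_ROLE FROM DBA_ROLE_PRIVS WHERE GRANTEE = 'FLINKUSER';
```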
```sql
sqlplus sys/password@host:port/SID AS SYSDBA

alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

-- Should now show "Database log mode: Archive Mode"
archive log list;
```

Note: Enabling log archiving requires a database restart and will consume disk space.
```sql
-- Enable supplemental logging for a specific table:
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```
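LogMiner also requires at least minimal supplemental logging at the database level. Using standard Oracle syntax, this can be enabled and verified as follows:

```sql
-- Enable minimal supplemental logging database-wide
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify: should return YES
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
```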
```sql
CREATE TABLESPACE logminer_tbs
  DATAFILE '/opt/oracle/oradata/SID/logminer_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE 2048M;
```
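To check that the tablespace was created with the expected datafile, you can query the standard DBA_DATA_FILES view (a sketch; requires a privileged session):

```sql
-- Verify the tablespace and its datafile
SELECT TABLESPACE_NAME, FILE_NAME, BYTES
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'LOGMINER_TBS';
```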
```sql
-- Grant SELECT on the V$DATABASE view, used to check whether archived
-- logging is enabled (LOG_MODE) and to obtain the database's current SCN.
GRANT SELECT ON V_$DATABASE TO flinkuser;

-- EXECUTE_CATALOG_ROLE is a predefined role that allows LogMiner to write the
-- dictionary into the redo log, so table structure changes are also recorded
-- in the logical log.
GRANT EXECUTE_CATALOG_ROLE TO flinkuser;

-- Allow the user to read data through the LOGMINING component.
-- This privilege exists only in 12c and later; it is not needed on
-- earlier versions.
GRANT LOGMINING TO flinkuser;

-- Allow the user to execute the procedures in the DBMS_LOGMNR package,
-- which analyzes and extracts changes from the database log files.
GRANT EXECUTE ON DBMS_LOGMNR TO flinkuser;

-- DBMS_LOGMNR_D is an Oracle-provided package used to analyze online and
-- archived redo log files at the database level.
GRANT EXECUTE ON DBMS_LOGMNR_D TO flinkuser;

-- LogMiner's data query interface is a set of system-provided views;
-- grant SELECT on the following views so data can be read from LogMiner.
GRANT SELECT ON V_$LOG TO flinkuser;
GRANT SELECT ON V_$LOG_HISTORY TO flinkuser;
GRANT SELECT ON V_$LOGMNR_LOGS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO flinkuser;
GRANT SELECT ON V_$LOGFILE TO flinkuser;
GRANT SELECT ON V_$ARCHIVED_LOG TO flinkuser;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO flinkuser;

exit;
```
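With the V$DATABASE grant in place, the flinkuser account can now perform the two checks described above in a single query:

```sql
-- Confirm archived logging is on (ARCHIVELOG) and read the current SCN,
-- which serves as the starting position for incremental synchronization.
SELECT LOG_MODE, CURRENT_SCN FROM V$DATABASE;
```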
| Parameter | Description |
| --- | --- |
| Data Source | Select the Oracle data source to synchronize. |
| Source Table | According to business requirements, choose "All Database Tables", "Specified Table", or "Specified Database".<br>All Database Tables: monitor all databases under the data source; databases and tables added while the task is running are synchronized to the target by default.<br>Specified Table: synchronize only the specified tables.<br>Specified Database: monitor the specified databases and schemas, and synchronize all tables under the schema or only those matching the rules. |
| Read Mode | Full + Incremental: data synchronization is divided into full and incremental phases. The full phase synchronizes the historical data in the database; after it completes, the task enters the incremental phase, which synchronizes changes from the CDC position in the database log recorded when the task started.<br>Incremental: only synchronize data from the CDC position in the database log after the task starts. |
| Consistency Semantics | Describes the consistency semantics of the reading side only. Supports At-least-once and Exactly-once.<br>At-least-once: data may be read repeatedly, and the target must support deduplication to ensure data consistency. Suitable for scenarios with a large volume of data in the full phase and non-numeric primary keys, where high synchronization performance is required.<br>Exactly-once: data is read strictly once, with some performance cost; tables without primary keys or unique index columns are not supported. Suitable for general scenarios where the source table has numeric primary keys or unique index columns.<br>The two modes are incompatible in the current version: if the mode is changed after task submission, stateful restart is not supported. |
| Advanced Settings (optional) | Configure parameters based on business requirements. |
| Parameter | Description |
| --- | --- |
| Data Source | Select the data source where the table to be synchronized is located. |
| Database | Supports selecting or manually entering the database name to read. The database bound to the data source is used as the default; other databases must be entered manually.<br>If the data source network is not connected and the database information cannot be pulled directly, you can enter the database name manually; data synchronization can still proceed as long as the Data Integration network is connected. |
| Schema | Supports selecting or manually entering an available schema under this data source. |
| Table | Supports selecting or manually entering the name of the table to read. |
| Read Mode | Full + Incremental: data synchronization is divided into full and incremental phases. The full phase synchronizes the historical data in the database; after it completes, the task enters the incremental phase, which synchronizes changes from the CDC position in the database log recorded when the task started.<br>Incremental: only synchronize data from the CDC position in the database log after the task starts. |
| Consistency Semantics | Describes the consistency semantics of the reading side only. Supports At-least-once and Exactly-once.<br>At-least-once: data may be read repeatedly, and the target must support deduplication to ensure data consistency. Suitable for scenarios with a large volume of data in the full phase and non-numeric primary keys, where high synchronization performance is required.<br>Exactly-once: data is read strictly once, with some performance cost; tables without primary keys or unique index columns are not supported. Suitable for general scenarios where the source table has numeric primary keys or unique index columns.<br>The two modes are incompatible in the current version: if the mode is changed after task submission, stateful restart is not supported. |
| Advanced Settings (optional) | Configure parameters according to business needs. |
| Parameter | Description |
| --- | --- |
| Data Source | The Oracle data source to write to. Required. |
| Database | Supports selecting or manually entering the database name to write to. The database bound to the data source is used as the default; other databases must be entered manually.<br>If the data source network is not connected and the database information cannot be pulled directly, you can enter the database name manually; data synchronization can still proceed as long as the Data Integration network is connected. |
| Schema | Supports selecting or manually entering the Oracle schema to write to. |
| Table | Supports selecting or manually entering the name of the table to write to.<br>If the data source network is not connected and the table information cannot be pulled directly, the table name can be entered manually; data synchronization can still proceed as long as the Data Integration network is connected. |
| Primary Key | Select a field to serve as the primary key of the written table. |
| Advanced Settings (optional) | Configure parameters according to business needs. |
| Oracle Type | Internal Type |
| --- | --- |
| NUMBER(p, s <= 0), p - s < 3 | TINYINT |
| NUMBER(p, s <= 0), p - s < 5 | SMALLINT |
| NUMBER(p, s <= 0), p - s < 10 | INT |
| NUMBER(p, s <= 0), p - s < 19 | BIGINT |
| NUMBER(p, s <= 0), 19 <= p - s <= 38 | DECIMAL(p - s, 0) |
| NUMBER(p, s > 0) | DECIMAL(p, s) |
| NUMBER(p, s <= 0), p - s > 38 | STRING |
| FLOAT, BINARY_FLOAT | FLOAT |
| DOUBLE PRECISION, BINARY_DOUBLE | DOUBLE |
| NUMBER(1) | BOOLEAN |
| DATE, TIMESTAMP [(p)] | TIMESTAMP [(p)] [WITHOUT TIMEZONE] |
| TIMESTAMP [(p)] WITH TIME ZONE | TIMESTAMP [(p)] WITH TIME ZONE |
| TIMESTAMP [(p)] WITH LOCAL TIME ZONE | TIMESTAMP_LTZ [(p)] |
| CHAR(n), NCHAR(n), NVARCHAR2(n), VARCHAR(n), VARCHAR2(n), CLOB, NCLOB, XMLType | STRING |
| BLOB, ROWID | BYTES |
| INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH | BIGINT |
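As an illustration of the NUMBER rules above, consider a hypothetical source table (the table and column names are examples only; the resulting internal types are shown as comments):

```sql
CREATE TABLE inventory.type_demo (
  id      NUMBER(9),      -- s = 0, p - s = 9  -> INT
  qty     NUMBER(4),      -- s = 0, p - s = 4  -> SMALLINT
  big_id  NUMBER(18),     -- s = 0, p - s = 18 -> BIGINT
  amount  NUMBER(10, 2),  -- s > 0             -> DECIMAL(10, 2)
  flag    NUMBER(1),      -- per the NUMBER(1) row -> BOOLEAN
  name    VARCHAR2(100),  -- -> STRING
  created TIMESTAMP(3)    -- -> TIMESTAMP(3)
);
```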
| Internal Type | Oracle Type |
| --- | --- |
| FLOAT | BINARY_FLOAT |
| DOUBLE | BINARY_DOUBLE |
| DECIMAL(p, s) | SMALLINT, FLOAT(s), DOUBLE PRECISION, REAL, NUMBER(p, s) |
| DATE | DATE |
| DECIMAL(20, 0) | - |
| FLOAT | REAL, FLOAT4 |
| DOUBLE | FLOAT8, DOUBLE PRECISION |
| DECIMAL(p, s) | NUMERIC(p, s), DECIMAL(p, s) |
| BOOLEAN | BOOLEAN |
| DATE | DATE |
| TIMESTAMP [(p)] [WITHOUT TIMEZONE] | TIMESTAMP [(p)] WITHOUT TIMEZONE |
| STRING | CHAR(n), VARCHAR(n), CLOB(n) |
| BYTES | RAW(s), BLOB |
| ARRAY | - |