Tencent Cloud WeData

Oracle Data Source

Last updated: 2026-02-10 10:49:51
DataInLong provides Oracle read and write capabilities. This article describes the environment preparation required for real-time data synchronization with Oracle and the capabilities currently supported.

Supported Versions

Currently, DataInLong supports real-time reading of individual Oracle tables and entire databases, as well as single-table writing. Real-time synchronization supports Oracle versions 11, 12, and 19.

Use Limits

The Oracle data source reads the Oracle redo log and archive log through LogMiner, Oracle's built-in log analysis tool, and replays them at the target to implement real-time synchronization. The current solution has the following limitations:
The incremental log retention period should be at least 3 days. If the data volume in the full-load phase is very large, extend the retention period accordingly.
Oracle read-only standby databases are supported only as logical standbys, not physical standbys.
Tables without a primary key cannot guarantee exactly-once delivery and may produce duplicate data; real-time synchronization tasks should therefore use tables with a primary key.
To monitor table field changes on the Oracle side, do not configure the system/sys accounts for the data source. Otherwise, all tables (including newly added ones) will require logging to be enabled before they can be synchronized.
When a DDL statement adds a column with a default value, the table schema obtained by LogMiner may be inconsistent with the change log, possibly causing parsing exceptions.
When LogMiner reads the log, DDL and DML ordering may be confused, causing data parsing to fail.
LogMiner may lose the completion flag of a transaction, making it impossible to confirm whether the transaction should be committed or whether the data should be sent.
LogMiner may fall significantly behind when throughput exceeds roughly 5,000 records/s.
In RAC mode, LogMiner may fail to detect log file switching, leading to partial data loss.
Note
If the above limitations are unacceptable, it is recommended to use OGG (Oracle GoldenGate) to write data to Kafka first, then consume Kafka to write to the target table. This approach requires the target table to be created in advance and cannot use real-time synchronization tasks for automatic table creation. If there are DDL changes on the source, the target must also be adjusted manually.

Database Environment Preparation

Granting Database Permissions

This step creates the basic synchronization user and grants the permissions needed to pull existing data. The user must be granted read permission on the required views and tables.
The ANALYZE ANY permission is primarily used to read table structures and unique indexes; when a table lacks a primary key, a unique index can be used as a substitute.
sqlplus sys/password@host:port/SID AS SYSDBA
CREATE USER flinkuser IDENTIFIED BY flinkpw DEFAULT TABLESPACE LOGMINER_TBS QUOTA UNLIMITED ON LOGMINER_TBS;
-- Allows users to connect to the database
GRANT CREATE SESSION TO flinkuser;
-- Grants users SELECT privileges on all tables in the database to read data
GRANT SELECT ANY TABLE TO flinkuser;
--SELECT_CATALOG_ROLE is a pre-defined database role that provides permissions to view database system tables. Additionally, during the incremental phase, the logminer session relies on this permission to query table structures.
GRANT SELECT_CATALOG_ROLE TO flinkuser;

--GRANT ANALYZE ANY is an authorization statement granting users the ability to analyze all tables, indexes, partitions, and table functions owned by any user. When a table lacks a primary key, this permission is relied upon to obtain unique indexes and use them as primary key substitutes.
GRANT ANALYZE ANY TO flinkuser;
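
Once the grants above have been executed, a quick sanity check is to connect as the new user and list the privileges it actually holds. The sketch below assumes the flinkuser/flinkpw credentials created above; the views queried are standard Oracle data dictionary views:

sqlplus flinkuser/flinkpw@host:port/SID
-- System privileges granted directly to the current user (should include CREATE SESSION, SELECT ANY TABLE, ANALYZE ANY)
SELECT privilege FROM user_sys_privs;
-- Roles granted to the current user (should include SELECT_CATALOG_ROLE)
SELECT granted_role FROM user_role_privs;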

Enabling Log Archiving

Log archiving enables Oracle to retain the logs needed for incremental reading. Without it, Oracle cannot generate the required logs, and incremental data cannot be synchronized.
1. Use a DBA account to connect.
sqlplus sys/password@host:port/SID AS SYSDBA
2. Enable log archiving. Incremental data cannot be read without enabling it.
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

Note: Enabling log archiving requires a database restart and consumes disk space.
3. Check if it is enabled.
-- The output should show "Database log mode: Archive Mode"
archive log list;
4. Enable supplemental log.
-- Enable supplemental logging for a specific table:
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Note:
If not enabled, data updates to partial columns on the source can cause other columns on the target to be empty.
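
Supplemental logging can also be enabled for the entire database instead of table by table, and its status can be verified through V$DATABASE. A minimal sketch, assuming a DBA session:

-- Enable minimal supplemental logging at the database level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Verify: SUPPLEMENTAL_LOG_DATA_MIN should be YES
SELECT supplemental_log_data_min FROM v$database;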
5. Create Tablespace
It is recommended to create a separate tablespace for real-time synchronization users, but existing tablespaces can also be reused.
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/SID/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE 2048M;


Granting Permission to Read Logs through LogMiner

--Grants the user SELECT permission on the V$DATABASE view, primarily used to check whether archived logging is enabled (LOG_MODE) and to obtain the database's current SCN position.
GRANT SELECT ON V_$DATABASE to flinkuser;

--EXECUTE_CATALOG_ROLE is a predefined role that, among other things, allows LogMiner to write the dictionary into the redo log, so that changes to table structures are also recorded in the logical log.
GRANT EXECUTE_CATALOG_ROLE TO flinkuser;

--Grants the user permission to read data through the LOGMINING component. This is not needed on versions earlier than 12c, where the LOGMINING privilege does not exist.
GRANT LOGMINING TO flinkuser;

--EXECUTE ON DBMS_LOGMNR permission is used to allow the user to execute programs within the DBMS_LOGMNR package. The DBMS_LOGMNR package is used to analyze and extract changes in the database log files.
GRANT EXECUTE ON DBMS_LOGMNR TO flinkuser;

--GRANT EXECUTE ON DBMS_LOGMNR_D gives the user execute permission on the DBMS_LOGMNR_D package, an Oracle-provided package used to extract the LogMiner data dictionary.
GRANT EXECUTE ON DBMS_LOGMNR_D TO flinkuser;

--The data query interface of logminer consists of a set of system-provided views. To read data from logminer, you need to grant access permissions to the following views.
GRANT SELECT ON V_$LOG TO flinkuser;
GRANT SELECT ON V_$LOG_HISTORY TO flinkuser;
GRANT SELECT ON V_$LOGMNR_LOGS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO flinkuser;
GRANT SELECT ON V_$LOGFILE TO flinkuser;
GRANT SELECT ON V_$ARCHIVED_LOG TO flinkuser;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO flinkuser;

exit;
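
After the grants take effect, you can verify as the synchronization user that the two items the V$DATABASE grant is used for (archived-log mode and the current SCN) are readable. A quick check, assuming the flinkuser account created above:

sqlplus flinkuser/flinkpw@host:port/SID
-- LOG_MODE should return ARCHIVELOG after log archiving is enabled
SELECT log_mode FROM v$database;
-- The current SCN is used as the starting position for incremental reading
SELECT current_scn FROM v$database;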

Real-Time Database Synchronization Source Configuration

Data Source: Select the Oracle data source to synchronize.
Source Table: According to business requirements, choose "All Database Tables", "Specified Table", or "Specified Database".
- All Database Tables: Monitor all databases under the data source. Databases and tables added while the task is running are synchronized to the target by default.
- Specified Table: Synchronize only the specified tables.
- Specified Database: Monitor the specified databases and schemas, and synchronize all tables (or the tables matching the configured rules) under each schema.
Read Mode: Choose "Full + Incremental" or "Incremental".
- Full + Incremental: Synchronization is divided into a full phase and an incremental phase. After the full phase completes, the task enters the incremental phase. The full phase synchronizes the historical data in the database; the incremental phase synchronizes from the CDC position in the log at the time the task starts.
- Incremental: Synchronize only data written after the task starts, from the CDC position in the log.
Consistency Semantics: Applies only to the read side. Supports At-least-once and Exactly-once.
- At-least-once: Data may be read repeatedly, and the target must support deduplication to ensure consistency. Suitable for scenarios with a large volume of full-phase data and non-numeric primary keys that require high synchronization performance.
- Exactly-once: Data is read strictly once, at some cost in performance; tables without primary keys or unique index columns are not supported. Suitable for general scenarios where the source table has a numeric primary key or a unique index column.
The two modes are incompatible in the current version. If the mode is changed after task submission, stateful restart is not supported.
Advanced Settings (optional): Configure parameters based on business requirements.

Real-Time Single Table Synchronization Read Configuration

1. On the DataInLong page, click Real-time synchronization in the left-hand navigation bar.
2. On the Real-time synchronization page, select Single Table Synchronization to create a new configuration (you can choose between Form mode and Canvas mode) and enter the configuration page.
Data Source: Select the data source where the table to be synchronized is located.
Database: Supports selecting or manually entering the database name to be read. By default, the database bound to the data source is used; other databases must be entered manually. If the data source network is not connected and the database information cannot be pulled directly, you can enter the database name manually; data synchronization can still be performed as long as the Data Integration network is connected.
Schema: Supports selecting or manually entering an available schema under this data source.
Table: Supports selecting or manually entering the name of the table to be read.
Read Mode: Choose "Full + Incremental" or "Incremental".
- Full + Incremental: Synchronization is divided into a full phase and an incremental phase. After the full phase completes, the task enters the incremental phase. The full phase synchronizes the historical data in the database; the incremental phase synchronizes from the CDC position in the log at the time the task starts.
- Incremental: Synchronize only data written after the task starts, from the CDC position in the log.
Consistency Semantics: Applies only to the read side. Supports At-least-once and Exactly-once.
- At-least-once: Data may be read repeatedly, and the target must support deduplication to ensure consistency. Suitable for scenarios with a large volume of full-phase data and non-numeric primary keys that require high synchronization performance.
- Exactly-once: Data is read strictly once, at some cost in performance; tables without primary keys or unique index columns are not supported. Suitable for general scenarios where the source table has a numeric primary key or a unique index column.
The two modes are incompatible in the current version. If the mode is changed after task submission, stateful restart is not supported.
Advanced Settings (optional): Configure parameters according to business needs.

Real-Time Single Table Synchronization Write Configuration

1. On the DataInLong page, click Real-time synchronization in the left-hand navigation bar.
2. On the Real-time synchronization page, select Single Table Synchronization to create a new configuration (you can choose between Form mode and Canvas mode) and enter the configuration page.
Data Source: The Oracle data source to be written to (required).
Database: Supports selecting or manually entering the database name to be written to. By default, the database bound to the data source is used; other databases must be entered manually. If the data source network is not connected and the database information cannot be pulled directly, you can enter the database name manually; data synchronization can still be performed as long as the Data Integration network is connected.
Schema: Supports selecting or manually entering the Oracle schema to be written to.
Table: Supports selecting or manually entering the name of the table to be written to. If the data source network is not connected and the table information cannot be pulled directly, the table name can be entered manually; data synchronization can still be performed as long as the Data Integration network is connected.
Primary Key: Select a field to serve as the primary key of the target table.
Advanced Settings (optional): Configure parameters according to business needs.

Supported Field Types

Read

The supported Oracle data types for reading and their conversion relationships are as follows (Oracle data types are first mapped to the data types of the data processing engine):

Oracle Type -> Internal Type
NUMBER(p, s <= 0), p - s < 3 -> TINYINT
NUMBER(p, s <= 0), p - s < 5 -> SMALLINT
NUMBER(p, s <= 0), p - s < 10 -> INT
NUMBER(p, s <= 0), p - s < 19 -> BIGINT
NUMBER(p, s <= 0), 19 <= p - s <= 38 -> DECIMAL(p - s, 0)
NUMBER(p, s > 0) -> DECIMAL(p, s)
NUMBER(p, s <= 0), p - s > 38 -> STRING
FLOAT, BINARY_FLOAT -> FLOAT
DOUBLE PRECISION, BINARY_DOUBLE -> DOUBLE
NUMBER(1) -> BOOLEAN
DATE, TIMESTAMP [(p)] -> TIMESTAMP [(p)] [WITHOUT TIMEZONE]
TIMESTAMP [(p)] WITH TIME ZONE -> TIMESTAMP [(p)] WITH TIME ZONE
TIMESTAMP [(p)] WITH LOCAL TIME ZONE -> TIMESTAMP_LTZ [(p)]
CHAR(n), NCHAR(n), NVARCHAR2(n), VARCHAR(n), VARCHAR2(n), CLOB, NCLOB, XMLType -> STRING
BLOB, ROWID -> BYTES
INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH -> BIGINT
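
The NUMBER mappings above hinge on the value of p - s. The following Python sketch (the function name is illustrative, not part of the product) encodes the table's rules for NUMBER types and can help predict which internal type a given column will map to:

```python
def map_oracle_number(p: int, s: int = 0) -> str:
    """Map Oracle NUMBER(p, s) to the internal engine type per the table above."""
    if p == 1 and s == 0:
        return "BOOLEAN"              # NUMBER(1) is treated as a boolean flag
    if s > 0:
        return f"DECIMAL({p}, {s})"   # fractional digits are preserved
    width = p - s                     # a negative scale widens the integer part
    if width < 3:
        return "TINYINT"
    if width < 5:
        return "SMALLINT"
    if width < 10:
        return "INT"
    if width < 19:
        return "BIGINT"
    if width <= 38:
        return f"DECIMAL({width}, 0)"
    return "STRING"                   # beyond DECIMAL(38), the value falls back to a string
```

For example, NUMBER(8) maps to INT, while NUMBER(38, -5) has p - s = 43 and falls back to STRING.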

Write

The supported data types for writing to Oracle and their conversion relationships are as follows:

Internal Type -> Oracle Type
FLOAT -> BINARY_FLOAT
DOUBLE -> BINARY_DOUBLE
DECIMAL(p, s) -> SMALLINT, FLOAT(s), DOUBLE PRECISION, REAL, NUMBER(p, s)
DATE -> DATE
DECIMAL(20, 0) -> -
FLOAT -> REAL, FLOAT4
DOUBLE -> FLOAT8, DOUBLE PRECISION
DECIMAL(p, s) -> NUMERIC(p, s), DECIMAL(p, s)
BOOLEAN -> BOOLEAN
DATE -> DATE
TIMESTAMP [(p)] [WITHOUT TIMEZONE] -> TIMESTAMP [(p)] WITHOUT TIMEZONE
STRING -> CHAR(n), VARCHAR(n), CLOB(n)
BYTES -> RAW(s), BLOB
ARRAY -> -

