CREATE TABLE `random_source` (
  f_sequence INT,
  f_random INT,
  f_random_str VARCHAR
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '10',            -- Number of data rows generated per second
  'fields.f_sequence.kind' = 'random', -- Random number
  'fields.f_sequence.min' = '1',       -- Minimum value of f_sequence
  'fields.f_sequence.max' = '10',      -- Maximum value of f_sequence
  'fields.f_random.kind' = 'random',   -- Random number
  'fields.f_random.min' = '1',         -- Minimum random number
  'fields.f_random.max' = '100',       -- Maximum random number
  'fields.f_random_str.length' = '10'  -- Random string length
);
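For intuition, the rows this datagen source emits can be simulated with a short sketch. This is illustration only, not Flink code, and the character set used for the random string is an assumption (Flink's datagen chooses its own characters):

```python
import random
import string

def generate_row():
    """Produce one row with the same field semantics as the datagen
    source above: random ints in [1, 10] and [1, 100], plus a random
    10-character string (character set is an assumption here)."""
    return {
        "f_sequence": random.randint(1, 10),   # 'min'='1', 'max'='10'
        "f_random": random.randint(1, 100),    # 'min'='1', 'max'='100'
        "f_random_str": "".join(
            random.choices(string.ascii_letters + string.digits, k=10)
        ),
    }

# 'rows-per-second' = '10' means ten such rows are produced each second.
rows = [generate_row() for _ in range(10)]
```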
Here, datagen is selected as an example. Select a data source based on your actual business needs.

-- Replace `<bucket name>` and `<folder name>` with your actual bucket and folder names.
CREATE TABLE `cos_sink` (
  f_sequence INT,
  f_random INT,
  f_random_str VARCHAR
) PARTITIONED BY (f_sequence) WITH (
  'connector' = 'filesystem',
  'path' = 'cosn://<bucket name>/<folder name>/',       -- Directory path to which data is written
  'format' = 'json',                                    -- Format of the written data
  'sink.rolling-policy.file-size' = '128MB',            -- Maximum file size
  'sink.rolling-policy.rollover-interval' = '30 min',   -- Maximum file write duration
  'sink.partition-commit.delay' = '1 s',                -- Partition commit delay
  'sink.partition-commit.policy.kind' = 'success-file'  -- Partition commit policy
);
For more WITH parameters of a sink, see "Filesystem (HDFS/COS)".

INSERT INTO `cos_sink`
SELECT * FROM `random_source`;
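The INSERT above fills one output directory per `f_sequence` value. The sketch below imitates, on a local filesystem, the layout produced by `PARTITIONED BY (f_sequence)` with `'format' = 'json'` and the `'success-file'` commit policy. The part-file name is illustrative, not the exact name Flink generates:

```python
import json
import os
import tempfile

# One directory per partition value, JSON-lines part files inside,
# and an empty _SUCCESS marker once the partition is committed.
root = tempfile.mkdtemp()
rows = [
    {"f_sequence": 1, "f_random": 42, "f_random_str": "abcdefghij"},
    {"f_sequence": 2, "f_random": 7, "f_random_str": "klmnopqrst"},
]
for row in rows:
    part_dir = os.path.join(root, f"f_sequence={row['f_sequence']}")
    os.makedirs(part_dir, exist_ok=True)
    # Hypothetical part-file name; Flink names its files differently.
    with open(os.path.join(part_dir, "part-0-0.json"), "a") as f:
        f.write(json.dumps(row) + "\n")
    # The 'success-file' policy commits a partition by writing an
    # empty _SUCCESS file after 'sink.partition-commit.delay' elapses.
    open(os.path.join(part_dir, "_SUCCESS"), "w").close()
```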
Select flink-connector-cos as the Built-in Connector and configure the COS parameters in Advanced Parameters as follows:

fs.AbstractFileSystem.cosn.impl: org.apache.hadoop.fs.CosN
fs.cosn.impl: org.apache.hadoop.fs.CosFileSystem
fs.cosn.credentials.provider: org.apache.flink.fs.cos.OceanusCOSCredentialsProvider
fs.cosn.bucket.region: <COS region>
fs.cosn.userinfo.appid: <COS user appid>
Replace <COS region> with your actual COS region, such as ap-guangzhou, and <COS user appid> with your actual APPID, which can be viewed in the Account Center.