Tencent Cloud

Stream Compute Service


Job Types

Last updated: 2023-11-08 10:15:00
You can log in to the Stream Compute Service console to create a job. At present, four job types are available on the Create job page: SQL, JAR, ETL, and Python. Select the job type that fits your business needs and scenarios.

SQL job

Compared with other programming languages, SQL has a lower learning cost. SQL-based job development lowers the barrier for data developers to use Flink. A SQL job lets you quickly view dynamic and static data in a stream, making it suitable for building powerful data transformation and analysis pipelines. In addition, SQL jobs apply identical semantics to both streaming and batch inputs, so the same query produces the same results on either.
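As a sketch, a minimal SQL job reads from a source table and continuously writes aggregated results to a sink. The table names below are illustrative, and the `WITH` clauses are left elided because the connector options depend on the actual source and sink you choose:

```sql
-- Illustrative only: in a real job, each WITH clause holds the
-- options of a concrete connector (e.g. a Kafka source, a JDBC sink).
CREATE TABLE orders (order_id BIGINT, amount DOUBLE) WITH (...);
CREATE TABLE order_totals (order_id BIGINT, total DOUBLE) WITH (...);

INSERT INTO order_totals
SELECT order_id, SUM(amount) AS total
FROM orders
GROUP BY order_id;
```

On a streaming input, the `INSERT INTO ... GROUP BY` statement runs continuously and updates the per-key totals as new rows arrive; on a bounded (batch) input, the same statement yields the same final totals.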

JAR job

A JAR job is developed with the Flink DataStream API or Flink Table API. Developing a JAR job requires knowledge of the Java or Scala DataStream API. This job type suits users who need low-level control over stream processing and have highly complex requirements. To develop a JAR job, you first develop and compile the JAR package in your local environment.

ETL job

An extract, transform, and load (ETL) job collects data from various sources, transforms the data, enriches it with additional information, and stores the results. An ETL job is easy to operate: creating a lightweight ETL job takes about one minute. You do not even need programming knowledge to start one; you simply select a source table and a destination table and configure the field mapping based on your business logic. The data in your business systems is then extracted, cleansed/transformed, and loaded into a data warehouse.

Python job

A Python job is developed in Python code and requires knowledge of Python and the libraries/packages it supports. Compared with other programming languages, Python is easy to learn: it imposes fewer conventions and special cases in its syntax and keeps the focus on what you want your code to accomplish rather than on elaborate language constructs. To develop a Python job, first write your Python files and package them into a .zip file locally, then upload the package before configuring the job in the console.
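The packaging step described above can be sketched with the Python standard library. This is only an illustration of bundling local .py files into a .zip archive; the directory and file names are hypothetical, and the console does not require this particular helper:

```python
import zipfile
from pathlib import Path


def package_python_job(source_dir: str, archive_path: str) -> list[str]:
    """Bundle all .py files under source_dir into a .zip for upload.

    Returns the list of archived file names (relative to source_dir).
    """
    source = Path(source_dir)
    archived = []
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for py_file in sorted(source.rglob("*.py")):
            # Store paths relative to the job directory so that
            # intra-package imports still resolve after extraction.
            rel = py_file.relative_to(source)
            zf.write(py_file, rel)
            archived.append(str(rel))
    return archived


# Hypothetical usage: package_python_job("my_job", "my_job.zip")
```

Paths are stored relative to the job directory so the archive layout matches the local source layout, which is the usual expectation for Python module resolution.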
