
CreateSparkApp

Last updated: 2025-11-13 20:53:25

1. API Description

Domain name for API request: dlc.intl.tencentcloudapi.com.

This API is used to create a Spark job.

A maximum of 20 requests can be initiated per second for this API.

We recommend you use API Explorer. It provides a range of capabilities, including online calls, signature authentication, SDK code generation, and quick API search, and lets you view requests, responses, and auto-generated examples.

2. Input Parameters

The following request parameter list only provides API request parameters and some common parameters. For the complete common parameter list, see Common Request Parameters.

| Parameter Name | Required | Type | Description |
|---|---|---|---|
| Action | Yes | String | Common parameter. The value used for this API: CreateSparkApp. |
| Version | Yes | String | Common parameter. The value used for this API: 2021-01-25. |
| Region | Yes | String | Common parameter. For more information, see the list of regions supported by the product. |
| AppName | Yes | String | The Spark job name. |
| AppType | Yes | Integer | The Spark job type. Valid values: 1 for a Spark JAR job and 2 for a Spark streaming job. |
| DataEngine | Yes | String | The data engine that executes the Spark job. |
| AppFile | Yes | String | The path of the Spark job package. |
| RoleArn | Yes | Integer | The CAM role arn that grants data access. In the console, it can be found under Data Job -> Job Configuration; via SDK, it can be obtained with the DescribeUserRoles API. |
| AppDriverSize | Yes | String | The driver size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs). |
| AppExecutorSize | Yes | String | The executor size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs). |
| AppExecutorNums | Yes | Integer | The number of Spark job executors. |
| Eni | No | String | This field has been deprecated. Use the DataSource field instead. |
| IsLocal | No | String | The source of the Spark job package. Valid values: cos for COS and lakefs for the local system (used in the console; this method does not support direct API calls). |
| MainClass | No | String | The main class of the Spark job. |
| AppConf | No | String | Spark configurations, separated by line breaks. |
| IsLocalJars | No | String | The source of the dependency JAR packages of the Spark job. Valid values: cos for COS and lakefs for the local system (used in the console; this method does not support direct API calls). |
| AppJars | No | String | The dependency JAR packages of the Spark JAR job, separated by commas. |
| IsLocalFiles | No | String | The source of the dependency files of the Spark job. Valid values: cos for COS and lakefs for the local system (used in the console; this method does not support direct API calls). |
| AppFiles | No | String | The dependency files of the Spark job (files other than JAR and ZIP packages), separated by commas. |
| CmdArgs | No | String | The input parameters of the Spark job, separated by commas. |
| MaxRetries | No | Integer | The maximum number of retries, valid for Spark streaming tasks only. |
| DataSource | No | String | The data source name. |
| IsLocalPythonFiles | No | String | The source of the PySpark dependencies. Valid values: cos for COS and lakefs for the local system (used in the console; this method does not support direct API calls). |
| AppPythonFiles | No | String | The PySpark dependencies (Python files), separated by commas, with .py, .zip, and .egg formats supported. |
| IsLocalArchives | No | String | The source of the dependency archives of the Spark job. Valid values: cos for COS and lakefs for the local system (used in the console; this method does not support direct API calls). |
| AppArchives | No | String | The dependency archives of the Spark job, separated by commas, with .tar.gz, .tgz, and .tar formats supported. |
| SparkImage | No | String | The Spark image version. |
| SparkImageVersion | No | String | The Spark image version name. |
| AppExecutorMaxNumbers | No | Integer | The maximum executor count, which defaults to 1. This parameter applies when the "Dynamic" mode is selected; otherwise, the executor count equals AppExecutorNums. |
| SessionId | No | String | The ID of the associated Data Lake Compute query script. |
| IsInherit | No | Integer | Whether to inherit the task resource configuration from the cluster template. Valid values: 0 (default) for no and 1 for yes. |
| IsSessionStarted | No | Boolean | Whether to run the task with the session SQLs. Valid values: false for no and true for yes. |
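
As a hedged illustration of how the required parameters above fit together (all values are placeholders, not real resources, and the helper name is our own), a request body for a minimal Spark JAR job could be assembled like this:

```python
import json

def build_create_spark_app_body(app_name, app_type, data_engine, app_file,
                                role_arn, driver_size="small",
                                executor_size="small", executor_nums=1):
    """Assemble the JSON body for CreateSparkApp from the required parameters.

    Only the parameters marked Required in the table are included; optional
    ones (MainClass, AppConf, AppJars, ...) can be merged in by the caller.
    """
    if app_type not in (1, 2):  # 1 = Spark JAR job, 2 = Spark streaming job
        raise ValueError("AppType must be 1 (JAR) or 2 (streaming)")
    body = {
        "AppName": app_name,
        "AppType": app_type,
        "DataEngine": data_engine,
        "AppFile": app_file,
        "RoleArn": role_arn,
        "AppDriverSize": driver_size,    # small/medium/large/xlarge
        "AppExecutorSize": executor_size,
        "AppExecutorNums": executor_nums,
    }
    return json.dumps(body)

# Placeholder job: names and the RoleArn value are illustrative only.
payload = build_create_spark_app_body("spark-test", 1, "spark-engine", "test.jar", 12)
```

The common parameters (Action, Version, Region) travel as headers or query parameters rather than in this body; the official SDKs add them for you.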

3. Output Parameters

| Parameter Name | Type | Description |
|---|---|---|
| SparkAppId | String | The unique ID of the application. Note: this field may return null, indicating that no valid value can be obtained. |
| RequestId | String | The unique request ID generated by the server, returned for every request (if the request fails to reach the server, no RequestId is returned). Provide the RequestId when locating a problem. |

4. Example

Example 1: Creating a Spark job

This example shows you how to create a Spark job.

Input Example

POST / HTTP/1.1
Host: dlc.intl.tencentcloudapi.com
Content-Type: application/json
X-TC-Action: CreateSparkApp
<Common request parameters>

{
    "AppName": "spark-test",
    "AppType": 1,
    "DataEngine": "spark-engine",
    "Eni": "kafka-eni",
    "IsLocal": "cos",
    "AppFile": "test.jar",
    "RoleArn": 12,
    "MainClass": "com.test.WordCount",
    "AppConf": "spark-default.properties",
    "IsLocalJars": "cos",
    "AppJars": "com.test2.jar",
    "IsLocalFiles": "cos",
    "AppFiles": "spark-default.properties",
    "AppDriverSize": "small",
    "AppExecutorSize": "small",
    "AppExecutorNums": 1,
    "AppExecutorMaxNumbers": 1
}
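
The `<Common request parameters>` placeholder above includes, among other things, a TC3-HMAC-SHA256 signature. A minimal stdlib-only signing sketch follows; the SecretId/SecretKey values are dummies, and in practice the official SDKs compute this header for you:

```python
import hashlib
import hmac
from datetime import datetime, timezone

def sign_tc3(secret_id, secret_key, payload, timestamp,
             host="dlc.intl.tencentcloudapi.com", service="dlc"):
    """Build the TC3-HMAC-SHA256 Authorization header for a JSON POST request."""
    date = datetime.fromtimestamp(timestamp, tz=timezone.utc).strftime("%Y-%m-%d")
    ct = "application/json; charset=utf-8"
    canonical = "\n".join([
        "POST", "/", "",                      # method, URI, empty query string
        f"content-type:{ct}\nhost:{host}\n",  # canonical headers (trailing \n)
        "content-type;host",                  # signed headers
        hashlib.sha256(payload.encode()).hexdigest(),
    ])
    scope = f"{date}/{service}/tc3_request"
    to_sign = "\n".join(["TC3-HMAC-SHA256", str(timestamp), scope,
                         hashlib.sha256(canonical.encode()).hexdigest()])
    # Derive the signing key: "TC3"+key -> date -> service -> "tc3_request"
    k = hmac.new(("TC3" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k = hmac.new(k, service.encode(), hashlib.sha256).digest()
    k = hmac.new(k, b"tc3_request", hashlib.sha256).digest()
    sig = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"TC3-HMAC-SHA256 Credential={secret_id}/{scope}, "
            f"SignedHeaders=content-type;host, Signature={sig}")

# Dummy credentials and a fixed timestamp, for illustration only.
auth = sign_tc3("AKIDexample", "examplekey", '{"AppName": "spark-test"}', 1700000000)
```

The resulting string is sent as the Authorization header, alongside X-TC-Action, X-TC-Version, X-TC-Region, and X-TC-Timestamp.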

Output Example

{
    "Response": {
        "SparkAppId": "2aedsa7a-9f72-44aa-9fd4-65cb739d6301",
        "RequestId": "2ae4707a-9f72-44aa-9fd4-65cb739d6301"
    }
}
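
Unpacking a response like the one above can be sketched as follows; per the output parameter table, SparkAppId may be null, and failed calls carry an in-band Error object instead (the helper name is our own):

```python
import json

def parse_create_spark_app_response(raw):
    """Extract SparkAppId and RequestId, surfacing API-level errors."""
    resp = json.loads(raw)["Response"]
    if "Error" in resp:  # Tencent Cloud APIs report failures in-band
        raise RuntimeError(f'{resp["Error"]["Code"]}: {resp["Error"]["Message"]} '
                           f'(RequestId={resp["RequestId"]})')
    return resp.get("SparkAppId"), resp["RequestId"]  # SparkAppId may be null

# The response body from the output example above.
raw = ('{"Response": {"SparkAppId": "2aedsa7a-9f72-44aa-9fd4-65cb739d6301", '
       '"RequestId": "2ae4707a-9f72-44aa-9fd4-65cb739d6301"}}')
app_id, request_id = parse_create_spark_app_response(raw)
```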

5. Developer Resources

SDK

TencentCloud API 3.0 integrates SDKs that support various programming languages to make it easier for you to call APIs.

Command Line Interface

6. Error Code

The following only lists the error codes related to the API business logic. For other error codes, see Common Error Codes.

| Error Code | Description |
|---|---|
| FailedOperation | The operation failed. |
| InternalError.InternalSystemException | The business system is abnormal. Please try again or submit a ticket to contact us. |
| InvalidParameter.InvalidAppFileFormat | The specified Spark task package file format does not match. Currently, only .jar or .py is supported. |
| InvalidParameter.InvalidDriverSize | The current DriverSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge. |
| InvalidParameter.InvalidExecutorSize | The current ExecutorSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge. |
| InvalidParameter.InvalidFilePathFormat | The specified file path format is not compliant. Currently, only cosn:// or lakefs:// is supported. |
| InvalidParameter.InvalidRoleArn | The CAM role arn is invalid. |
| InvalidParameter.SparkJobNotUnique | The specified Spark task already exists. |
| InvalidParameter.SparkJobOnlySupportSparkBatchEngine | Spark tasks can only be run using the Spark job engine. |
| ResourceNotFound.DataEngineNotFound | The specified engine does not exist. |
| ResourceNotFound.SessionInsufficientResources | There are currently no resources to create a session. Please try again later or use an annual or monthly subscription cluster. |
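
Which of these errors are worth retrying is a judgment call; the sketch below treats the two whose descriptions say "try again" as transient and everything else as permanent. It is a client-side policy of our own, not part of the API:

```python
# Codes whose descriptions above suggest the condition may clear on its own.
RETRYABLE = {
    "InternalError.InternalSystemException",          # "try again or submit a ticket"
    "ResourceNotFound.SessionInsufficientResources",  # "try again later"
}

def should_retry(error_code):
    """Return True if a CreateSparkApp error is plausibly transient."""
    # Parameter, uniqueness, and not-found errors will not succeed on retry.
    return error_code in RETRYABLE
```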
