Tencent Cloud
Data Lake Compute

ModifySparkApp

Last updated: 2025-11-13 20:53:18

1. API Description

Domain name for API request: dlc.intl.tencentcloudapi.com.

This API is used to update a Spark job.

A maximum of 20 requests can be initiated per second for this API.

We recommend using API Explorer, which provides a range of capabilities including online calls, signature authentication, SDK code generation, and quick API search. It enables you to view each request, its response, and auto-generated examples.

2. Input Parameters

The following request parameter list only provides API request parameters and some common parameters. For the complete common parameter list, see Common Request Parameters.

| Parameter Name | Required | Type | Description |
| --- | --- | --- | --- |
| Action | Yes | String | Common parameter. The value used for this API: ModifySparkApp. |
| Version | Yes | String | Common parameter. The value used for this API: 2021-01-25. |
| Region | Yes | String | Common parameter. For more information, see the list of regions supported by the product. |
| AppName | Yes | String | The Spark job name. |
| AppType | Yes | Integer | The Spark job type. Valid values: 1 for Spark JAR job and 2 for Spark streaming job. |
| DataEngine | Yes | String | The data engine executing the Spark job. |
| AppFile | Yes | String | The path of the Spark job package. |
| RoleArn | Yes | Integer | The data access policy (CAM role ARN). |
| AppDriverSize | Yes | String | The driver size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs). |
| AppExecutorSize | Yes | String | The executor size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs). |
| AppExecutorNums | Yes | Integer | The number of Spark job executors. |
| SparkAppId | Yes | String | The Spark job ID. |
| Eni | No | String | This field is deprecated. Use the DataSource field instead. |
| IsLocal | No | String | The source of the Spark job package. Valid values: cos for COS and lakefs for the local system (console use only; direct API calls are not supported). |
| MainClass | No | String | The main class of the Spark job. |
| AppConf | No | String | Spark configurations, separated by line breaks (see the payload sketch after this table). |
| IsLocalJars | No | String | The source of the dependency JAR packages of the Spark job. Valid values: cos for COS and lakefs for the local system (console use only; direct API calls are not supported). |
| AppJars | No | String | The dependency JAR packages of the Spark JAR job, separated by commas. |
| IsLocalFiles | No | String | The source of the dependency files of the Spark job. Valid values: cos for COS and lakefs for the local system (console use only; direct API calls are not supported). |
| AppFiles | No | String | The dependency files of the Spark job (files other than JAR and ZIP packages), separated by commas. |
| IsLocalPythonFiles | No | String | The source of the PySpark dependencies. Valid values: cos for COS and lakefs for the local system (console use only; direct API calls are not supported). |
| AppPythonFiles | No | String | The PySpark dependencies (Python files), separated by commas, with .py, .zip, and .egg formats supported. |
| CmdArgs | No | String | The input parameters of the Spark job, separated by commas. |
| MaxRetries | No | Integer | The maximum number of retries, valid for Spark streaming tasks only. |
| DataSource | No | String | The data source name. |
| IsLocalArchives | No | String | The source of the dependency archives of the Spark job. Valid values: cos for COS and lakefs for the local system (console use only; direct API calls are not supported). |
| AppArchives | No | String | The dependency archives of the Spark job, separated by commas, with .tar.gz, .tgz, and .tar formats supported. |
| SparkImage | No | String | The Spark image version. |
| SparkImageVersion | No | String | The Spark image version name. |
| AppExecutorMaxNumbers | No | Integer | The maximum executor count (defaults to 1). This parameter applies when Dynamic mode is selected; if Dynamic mode is not selected, the executor count equals AppExecutorNums. |
| SessionId | No | String | The associated Data Lake Compute query script. |
| IsInherit | No | Integer | Whether to inherit the task resource configuration from the cluster configuration template. Valid values: 0 (default) for no and 1 for yes. |
| IsSessionStarted | No | Boolean | Whether to run the task with the session SQLs. Valid values: false for no and true for yes. |
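
Several string parameters pack multiple values into a single string: AppConf entries are newline-separated, while AppJars, AppFiles, AppPythonFiles, AppArchives, and CmdArgs are comma-separated. Below is a minimal, non-authoritative Python sketch of assembling such a payload; the bucket paths, configuration keys, and argument values are illustrative assumptions only.

```python
import json

# Hypothetical payload for illustration; paths and config keys are made up.
params = {
    "SparkAppId": "batch_sadfafd",
    "AppName": "spark-test",
    "AppType": 1,
    "DataEngine": "spark-engine",
    "AppFile": "cosn://example-bucket/jobs/test.jar",  # assumed cosn:// path (IsLocal = "cos")
    "IsLocal": "cos",
    "RoleArn": 12,
    "AppDriverSize": "small",
    "AppExecutorSize": "small",
    "AppExecutorNums": 1,
    # AppConf: one spark.* setting per line, joined with "\n".
    "AppConf": "spark.sql.shuffle.partitions=200\nspark.executor.memoryOverhead=1g",
    # Comma-separated lists packed into single strings.
    "AppJars": "cosn://example-bucket/deps/dep1.jar,cosn://example-bucket/deps/dep2.jar",
    "CmdArgs": "--date,2025-01-01",
}
body = json.dumps(params)  # JSON body of the signed POST request shown in Section 4
```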

3. Output Parameters

| Parameter Name | Type | Description |
| --- | --- | --- |
| RequestId | String | The unique request ID generated by the server and returned for every request (if a request fails to reach the server, no RequestId is returned). The RequestId is required for locating a problem. |

4. Example

Example 1: Updating a Spark job

This example shows you how to update a Spark job.

Input Example

POST / HTTP/1.1
Host: dlc.intl.tencentcloudapi.com
Content-Type: application/json
X-TC-Action: ModifySparkApp
<Common request parameters>

{
    "SparkAppId": "batch_sadfafd",
    "AppName": "spark-test",
    "AppType": 1,
    "DataEngine": "spark-engine",
    "Eni": "kafka-eni",
    "IsLocal": "cos",
    "AppFile": "test.jar",
    "RoleArn": 12,
    "MainClass": "com.test.WordCount",
    "AppConf": "spark-default.properties",
    "IsLocalJars": "cos",
    "AppJars": "com.test2.jar",
    "IsLocalFiles": "cos",
    "AppFiles": "spark-default.properties",
    "AppDriverSize": "small",
    "AppExecutorSize": "small",
    "AppExecutorNums": 1,
    "AppExecutorMaxNumbers": 1
}

Output Example

{
    "Response": {
        "RequestId": "2ae4707a-9f72-44aa-9fd4-65cb739d6301"
    }
}

5. Developer Resources

SDK

TencentCloud API 3.0 integrates SDKs that support various programming languages to make it easier for you to call APIs.
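
As a hedged illustration (not official sample code), the request from Section 4 might look as follows with the Python SDK; the tencentcloud.dlc.v20210125 module name follows the API version above, and the credentials and region values are placeholders.

```python
import json

from tencentcloud.common import credential
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.dlc.v20210125 import dlc_client, models

# Placeholder credentials; in practice, read secrets from the environment.
cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
http_profile = HttpProfile(endpoint="dlc.intl.tencentcloudapi.com")
client = dlc_client.DlcClient(cred, "ap-singapore", ClientProfile(httpProfile=http_profile))

# Populate the request with the required parameters from Section 2.
req = models.ModifySparkAppRequest()
req.from_json_string(json.dumps({
    "SparkAppId": "batch_sadfafd",
    "AppName": "spark-test",
    "AppType": 1,
    "DataEngine": "spark-engine",
    "AppFile": "test.jar",
    "RoleArn": 12,
    "AppDriverSize": "small",
    "AppExecutorSize": "small",
    "AppExecutorNums": 1,
}))

resp = client.ModifySparkApp(req)
print(resp.to_json_string())  # e.g. {"RequestId": "..."}
```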

Command Line Interface

6. Error Code

The following only lists the error codes related to the API business logic. For other error codes, see Common Error Codes.

| Error Code | Description |
| --- | --- |
| FailedOperation | The operation failed. |
| InternalError.InternalSystemException | The business system is abnormal. Please try again or submit a ticket to contact us. |
| InvalidParameter.InvalidAppFileFormat | The specified Spark job package file format is invalid. Currently, only .jar and .py are supported. |
| InvalidParameter.InvalidDataEngineName | The data engine name is invalid. |
| InvalidParameter.InvalidDriverSize | The DriverSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge. |
| InvalidParameter.InvalidExecutorSize | The ExecutorSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge. |
| InvalidParameter.InvalidFileCompressionFormat | The specified file compression format is not supported. Currently, only .tar.gz, .tar, and .tgz are supported. |
| InvalidParameter.InvalidFilePathFormat | The specified file path format is not supported. Currently, only cosn:// and lakefs:// are supported. |
| InvalidParameter.SQLBase64DecodeFail | Base64 decoding of the SQL script failed. |
| InvalidParameter.SparkJobNotFound | The specified Spark task does not exist. |
| InvalidParameter.SparkJobOnlySupportSparkBatchEngine | Spark tasks can only be run on a Spark job engine. |
| ResourceInsufficient.SparkJobInsufficientResources | The specified Spark job resources are insufficient. Please adjust the driver/executor specifications. |
| ResourceNotFound.DataEngineNotFound | The specified engine does not exist. |
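
For programmatic handling, the Python SDK surfaces these codes through TencentCloudSDKException. A minimal sketch, reusing the client and req objects from the sketch in Section 5:

```python
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException

try:
    resp = client.ModifySparkApp(req)
    print(resp.RequestId)
except TencentCloudSDKException as err:
    # err.code matches the "Error Code" column above; err.requestId helps locate the call.
    if err.code == "ResourceInsufficient.SparkJobInsufficientResources":
        print("Adjust AppDriverSize/AppExecutorSize or executor counts:", err.message)
    else:
        raise
```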
