Tencent Cloud Data Lake Compute


DescribeSparkAppTasks

Last updated: 2025-11-13 20:53:20

1. API Description

Domain name for API request: dlc.intl.tencentcloudapi.com.

This API is used to query the list of running task instances of a Spark job.

A maximum of 40 requests can be initiated per second for this API.

We recommend using API Explorer, which provides a range of capabilities, including online calls, signature authentication, SDK code generation, and quick API search. It enables you to view the request, the response, and auto-generated examples.

2. Input Parameters

The following list includes only the request parameters specific to this API and some common parameters. For the complete list of common parameters, see Common Request Parameters.

| Parameter Name | Required | Type | Description |
|---|---|---|---|
| Action | Yes | String | Common Params. The value used for this API: DescribeSparkAppTasks. |
| Version | Yes | String | Common Params. The value used for this API: 2021-01-25. |
| Region | Yes | String | Common Params. For more information, please see the list of regions supported by the product. |
| JobId | Yes | String | Spark job ID |
| Offset | No | Integer | Paginated query offset |
| Limit | No | Integer | Paginated query limit |
| TaskId | No | String | Execution instance ID |
| StartTime | No | String | The update start time in the format of yyyy-MM-dd HH:mm:ss. |
| EndTime | No | String | The update end time in the format of yyyy-MM-dd HH:mm:ss. |
| Filters.N | No | Array of Filter | Filter by this parameter, which can be task-state. |
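
For illustration, a request body that narrows the results with the optional parameters might look like the following. The time window and the filter value are placeholders, and the Filters.N entry assumes the Filter type takes a Name and a list of Values, with task-state as the documented key; see the Data Types reference for the authoritative definition.

{
    "JobId": "batch_133e005d-6486-4517-8ea7-b6b97b183a6b",
    "Offset": 0,
    "Limit": 10,
    "StartTime": "2024-06-01 00:00:00",
    "EndTime": "2024-06-02 00:00:00",
    "Filters": [
        {
            "Name": "task-state",
            "Values": ["1"]
        }
    ]
}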

3. Output Parameters

| Parameter Name | Type | Description |
|---|---|---|
| Tasks | TaskResponseInfo | Task result (this field has been deprecated). Note: this field may return null, indicating that no valid values can be obtained. |
| TotalCount | Integer | Total number of tasks |
| SparkAppTasks | Array of TaskResponseInfo | List of task results. Note: this field may return null, indicating that no valid values can be obtained. |
| RequestId | String | The unique request ID, generated by the server and returned for every request (a request that fails to reach the server does not obtain a RequestId). RequestId is required for locating a problem. |

4. Example

Example1 Querying the Running Task List of a Spark Job

This example shows you how to query the list of running tasks of a Spark job.

Input Example

POST / HTTP/1.1
Host: dlc.intl.tencentcloudapi.com
Content-Type: application/json
X-TC-Action: DescribeSparkAppTasks
<Common request parameters>

{
    "JobId": "batch_133e005d-6486-4517-8ea7-b6b97b183a6b",
    "Offset": 0,
    "Limit": 10
}

Output Example

{
    "Response": {
        "Tasks": {
            "DatabaseName": "abc",
            "DataAmount": 0,
            "Id": "abc",
            "UsedTime": 0,
            "OutputPath": "abc",
            "CreateTime": "abc",
            "State": 0,
            "SQLType": "abc",
            "SQL": "abc",
            "ResultExpired": true,
            "RowAffectInfo": "abc",
            "DataSet": "abc",
            "Error": "abc",
            "Percentage": 0,
            "OutputMessage": "abc",
            "TaskType": "abc",
            "ProgressDetail": "abc",
            "UpdateTime": "abc",
            "DataEngineId": "abc",
            "OperateUin": "abc",
            "DataEngineName": "abc",
            "InputType": "abc",
            "InputConf": "abc",
            "DataNumber": 0,
            "CanDownload": true,
            "UserAlias": "abc",
            "SparkJobName": "abc",
            "SparkJobId": "abc",
            "SparkJobFile": "abc",
            "UiUrl": "abc",
            "TotalTime": 0,
            "CmdArgs": "abc",
            "ImageVersion": "abc",
            "DriverSize": "abc",
            "ExecutorSize": "abc",
            "ExecutorNums": 1,
            "ExecutorMaxNumbers": 1
        },
        "TotalCount": 0,
        "SparkAppTasks": [
            {
                "DatabaseName": "abc",
                "DataAmount": 0,
                "Id": "abc",
                "UsedTime": 0,
                "OutputPath": "abc",
                "CreateTime": "abc",
                "State": 0,
                "SQLType": "abc",
                "SQL": "abc",
                "ResultExpired": true,
                "RowAffectInfo": "abc",
                "DataSet": "abc",
                "Error": "abc",
                "Percentage": 0,
                "OutputMessage": "abc",
                "TaskType": "abc",
                "ProgressDetail": "abc",
                "UpdateTime": "abc",
                "DataEngineId": "abc",
                "OperateUin": "abc",
                "DataEngineName": "abc",
                "InputType": "abc",
                "InputConf": "abc",
                "DataNumber": 0,
                "CanDownload": true,
                "UserAlias": "abc",
                "SparkJobName": "abc",
                "SparkJobId": "abc",
                "SparkJobFile": "abc",
                "UiUrl": "abc",
                "TotalTime": 0,
                "CmdArgs": "abc",
                "ImageVersion": "abc",
                "DriverSize": "abc",
                "ExecutorSize": "abc",
                "ExecutorNums": 1,
                "ExecutorMaxNumbers": 1
            }
        ],
        "RequestId": "abc"
    }
}

5. Developer Resources

SDK

TencentCloud API 3.0 integrates SDKs that support various programming languages to make it easier for you to call APIs.
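
As an illustration, the following Python sketch calls this API through the Tencent Cloud SDK and pages through the results with Offset and Limit until TotalCount task instances have been collected. It assumes the tencentcloud-sdk-python package and its DLC module (tencentcloud.dlc.v20210125); the region, credentials, and JobId are placeholders.

import os

from tencentcloud.common import credential
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.dlc.v20210125 import dlc_client, models

# Credentials are read from the environment; the region is a placeholder.
cred = credential.Credential(os.environ["TENCENTCLOUD_SECRET_ID"],
                             os.environ["TENCENTCLOUD_SECRET_KEY"])
http_profile = HttpProfile()
http_profile.endpoint = "dlc.intl.tencentcloudapi.com"
client_profile = ClientProfile()
client_profile.httpProfile = http_profile
client = dlc_client.DlcClient(cred, "ap-singapore", client_profile)

req = models.DescribeSparkAppTasksRequest()
req.JobId = "batch_133e005d-6486-4517-8ea7-b6b97b183a6b"  # placeholder Spark job ID
req.Limit = 20

tasks, offset = [], 0
try:
    while True:
        req.Offset = offset
        resp = client.DescribeSparkAppTasks(req)
        tasks.extend(resp.SparkAppTasks or [])
        # Stop once every one of the TotalCount task instances has been fetched.
        if not resp.SparkAppTasks or offset + req.Limit >= resp.TotalCount:
            break
        offset += req.Limit
except TencentCloudSDKException as err:
    # The exception carries the error code and RequestId described in the Error Code section.
    print(err)

for task in tasks:
    print(task.Id, task.State, task.SparkJobName)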

Command Line Interface

6. Error Code

The following only lists the error codes related to the API business logic. For other error codes, see Common Error Codes.

| Error Code | Description |
|---|---|
| FailedOperation | The operation failed. |
| InternalError.InternalSystemException | The business system is abnormal. Please try again or submit a ticket to contact us. |
| InvalidParameter.FiltersValuesNumberOutOfLimit | The number of specified Filter.Values parameters exceeds the limit. Currently, it must be less than or equal to 50. |
| InvalidParameter.InvalidTimeFormat | The specified time format is not compliant. Currently, only YYYY-mm-dd HH:MM:SS is supported. |
| InvalidParameter.InvalidTimeParameter | The date parameters are invalid, for example, the end time is earlier than the start time. |
| InvalidParameter.ParameterNotFoundOrBeNone | The parameter is not found or is empty. |
| InvalidParameter.SparkJobFiltersKeyTypeNotMath | The specified Spark task Filter.Key does not match. Currently, only spark-app-type/user-name/spark-job-name/spark-job-id/key-word is supported. |
| InvalidParameter.SparkJobNotFound | The specified Spark task does not exist. |
| InvalidParameter.SparkJobNotUnique | The specified Spark task already exists. |
| ResourceUnavailable.WhiteListFunction | This feature is currently in an allowlist. Please contact us to activate it. |
