Tencent Cloud

Stream Compute Service


Managing Checkpoints

Last updated: 2023-11-08 10:16:47

Viewing checkpoint information

Log in to the Stream Compute Service console, select Jobs on the left sidebar, and click the Checkpoints tab of a job to view its checkpoint list.
The checkpoint list provides the following information:
Checkpoint ID/description: The ID uniquely identifies the checkpoint; the description is the information you specified or that the system generated automatically.
Trigger time: The time when checkpointing was triggered.
Completion time: The time when checkpointing was completed.
Time: The time taken to complete the checkpoint.
Status: The checkpoint status. Valid values include Creating, Present, Cleared, Timeout, and Failed.
Source: The checkpoint source. Created during running means the checkpoint was manually triggered by a user, while Created when the job is stopped means the Create a checkpoint when stopping the job option was selected when the job was stopped.
Job version: The job configuration version to which the checkpoint corresponds.
Location: The storage address of the checkpoint, currently a COS path.
Note
Cleared means the checkpoint has been manually or automatically removed from its COS path and can no longer be used to start a job.

Manually creating a checkpoint

You can manually create a checkpoint of a running job. The checkpoint contains all of the job's current state data and can be used for job upgrades and testing. To create one:
1. On the Checkpoints page of the job, click Trigger checkpoint.
2. In the pop-up window, enter a description and click Confirm.
A checkpoint whose source is Created during running will then appear in the checkpoint list. Wait until its status changes from Running to Completed; a Completed checkpoint can be used to restore the job state when the job is started.
Note
If the Checkpoints tab shows that the current cluster does not support checkpoints, submit a ticket to upgrade the cluster.

Recovering a job from checkpoint

When starting a job, you can select Use a checkpoint to recover the state of the job. Specifically, select the desired checkpoint and click Confirm.
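For reference, the console step above corresponds to the open-source Flink practice of passing a retained checkpoint path when submitting a job. The sketch below uses the standard Flink CLI `-s` flag; the COS path and JAR name are hypothetical placeholders, not values from this document.

```shell
# Sketch only: open-source Flink analog of "Use a checkpoint".
# <job-id> and chk-42 are placeholders; the Location column in the
# checkpoint list shows the actual COS path to use.
flink run -s cosn://my-bucket/flink/checkpoints/<job-id>/chk-42 my-job.jar
```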

Setting a checkpoint storage policy

By default, the latest 5 checkpoints of a job are retained in Flink. You can adjust the number of retained checkpoints with state.checkpoints.num-retained in the advanced parameters. For how to recover a job from a checkpoint, see Recovering a job from checkpoint above.
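Assuming the advanced parameters accept standard Flink configuration keys, the retention count could be raised as follows; the value 10 is illustrative, not a recommendation from this document.

```yaml
# Flink configuration key that controls how many completed
# checkpoints are kept; this service retains 5 by default.
state.checkpoints.num-retained: 10
```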

