Tencent Cloud

Setting the Historical Data Moderation Task
Last updated:2025-12-26 18:10:26

Feature Overview

This article describes how to use the historical data moderation feature of Content Moderation in the console. You can create historical data moderation tasks to run a one-time batch moderation of your image, video, audio, text, and document files.

Create a Moderation Task

1. Log in to the COS console.
2. In the left sidebar, click Bucket Management to go to the Bucket Management page.
3. Find the Bucket you want to operate on and click its name to enter the Bucket.
4. In the left sidebar, choose Content Moderation > Historical Moderation to go to the Historical Data Moderation page.
5. Click Create Moderation Task.
6. On the Scan Configuration page, choose a scanning method to moderate your files as needed.



Scan Scope: includes three types: Bucket file list, COS inventory report, and URL list file.
Bucket File List: Select files in the current Bucket for moderation. The scan scope can be narrowed by file upload time or by file prefix.
COS Inventory Report: Scan the inventory report generated by the COS inventory feature. The inventory file must be stored in the current Bucket.
URL List File: Scan a specified URL list file. Currently only the txt format is supported, with one URL per line.
Moderation Efficiency: includes two types: standard moderation and off-peak moderation.
Standard Moderation: The task starts moderating immediately after creation, with high real-time performance, and is billed at the standard moderation rate.
Off-Peak Moderation: Moderation runs on idle backend resources and typically starts within 24 hours after task creation. It offers lower efficiency and real-time performance than standard moderation but costs less. For pricing details, see Content Moderation Billing Items.
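For the URL list file scan scope above, the required format is plain text with exactly one URL per line. The following sketch builds such a file; the bucket name and object paths are hypothetical examples, not values from your account.

```python
# Sketch: build a url_list.txt for the "URL list file" scan scope.
# The format is one URL per line; the URLs below are hypothetical.
urls = [
    "https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com/images/a.jpg",
    "https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com/videos/b.mp4",
]

with open("url_list.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(urls) + "\n")  # newline-terminated, one URL per line
```

Upload the resulting txt file to your Bucket, then point the scan task at it.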
7. Click Next.
8. On the Moderation Policy page, set the moderation policy, configure the file types and scenario types to moderate, then click Next.


Moderate Image:
Moderation Suffix: Supports moderating images with extensions such as jpg, jpeg, png, bmp, webp, gif, heif, and heic, or using intelligent suffix determination.
Note:
Intelligent suffix determination can recognize the eight common image extensions listed above as well as some special extensions.
Large Image Moderation: Image moderation only supports images under 5MB. For images exceeding this limit, you can enable the large image moderation feature: the system compresses the image before moderating it, and images of up to 32MB are supported for compression.
Note:
The large image moderation feature incurs basic image processing fees. For pricing details, see Image Processing Fees.
Select a Moderation Policy: Select a moderation policy you have configured (if you have not configured one, you can use the system default policy). Different moderation policies correspond to different policy categories, and you can create custom policies for personalized scenarios. Scenarios such as pornography, illegal activities, and advertising are supported, and you can select one or more detection scenarios. For how to configure a moderation policy, see Set Moderation Policy.
Risk Library Associated: The policy scenario will display sample tags contained in the associated risk library. For example, if the risk library contains pornographic samples, you can select pornographic content in the policy scenario.
Moderation Scenarios: The audit scenarios displayed are the default scenarios or those configured in your audit policy. You can select the scenario categories you wish to audit.
Daily Moderation Limit: You can set the maximum number of images moderated per day. There is no limit by default.
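The image size rules above can be summarized in a small sketch: images up to 5MB are moderated directly, and with large image moderation enabled, images up to 32MB are compressed first. The function name and byte thresholds are illustrative assumptions based on the limits quoted in this document, not an official API.

```python
# Sketch of the size rules for image moderation (thresholds assumed
# from the 5MB / 32MB limits described above).
MB = 1024 * 1024

def image_moderation_path(size_bytes: int, large_image_enabled: bool) -> str:
    if size_bytes <= 5 * MB:
        return "direct"          # within the normal 5MB limit
    if large_image_enabled and size_bytes <= 32 * MB:
        return "compress-first"  # compressed before moderation
    return "skipped"             # too large to moderate
```

For example, a 10MB image is only moderated (after compression) when large image moderation is enabled.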
Moderate Video:
Moderation Suffix: Supports moderating videos with extensions such as mp4, wmv, rmvb, flv, avi, m3u8, mov, m4v, and mkv.
Select a Moderation Policy: Select a moderation policy you have configured (if you have not configured one, you can use the system default policy). Different moderation policies correspond to different policy categories, and you can create custom policies for personalized scenarios. Scenarios such as pornography, illegal activities, and advertising are supported, and you can select one or more detection scenarios. For how to configure a moderation policy, see Set Moderation Policy.
Risk Library Associated: The policy scenario will display sample tags contained in the associated risk library. For example, if the risk library contains pornographic samples, you can select pornographic content in the policy scenario.
Moderation Scenarios: Supports moderating scenarios such as pornography, illegal activities, and advertising. You can select one or more detection scenarios.
Daily Moderation Limit: You can set the maximum number of video files moderated per day. There is no limit by default.
Moderation Content: Supports moderating both video frames and the audio track.
Frame Capture Rule: Cloud Infinite implements video moderation by capturing video frames and moderating the captured images. Frames can be captured at fixed time intervals, at a fixed frame rate, or in a fixed quantity.
Fixed Time: Captures frames at fixed time intervals for moderation. You can set the time interval and the maximum number of frames captured per video.
Fixed Frame Rate: Captures a fixed number of frames per second for moderation. You can set the frames captured per second and the maximum number of frames captured per video.
Fixed Quantity: Captures a fixed number of frames at even intervals across the video for moderation. You can set the maximum number of frames captured per video.
Note:
The frame capture rule you set affects the moderation results.
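The three frame-capture rules above can be sketched as follows. The function and parameter names are illustrative assumptions standing in for the equivalent console settings; the console, not this code, performs the actual capture.

```python
# Sketch of the three frame-capture rules (parameter names assumed).
def capture_times(duration_s: float, mode: str,
                  interval_s: float = 5.0, fps: float = 1.0,
                  max_frames: int = 100) -> list[float]:
    if mode == "fixed_time":       # one frame every interval_s seconds
        times = [t * interval_s for t in range(int(duration_s // interval_s) + 1)]
    elif mode == "fixed_rate":     # fps frames captured per second
        step = 1.0 / fps
        times = [i * step for i in range(int(duration_s / step) + 1)]
    elif mode == "fixed_count":    # max_frames frames, evenly spaced
        step = duration_s / max_frames
        times = [i * step for i in range(max_frames)]
    else:
        raise ValueError(mode)
    return times[:max_frames]      # cap at the per-video maximum
```

For a 60-second video, a fixed 10-second interval yields 7 capture points (0s through 60s), while fixed quantity spreads the requested number of frames evenly across the whole duration.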
Moderate Audio:
Moderation Suffix: Supports moderating audio with extensions such as mp3, wav, aac, flac, amr, 3gp, m4a, wma, ogg, and ape.
Select a Moderation Policy: Select a moderation policy you have configured (if you have not configured one, you can use the system default policy). Different moderation policies correspond to different policy categories, and you can create custom policies for personalized scenarios. Scenarios such as pornography, illegal activities, and advertising are supported, and you can select one or more detection scenarios. For how to configure a moderation policy, see Set Moderation Policy.
Risk Library Associated: The policy scenario will display sample tags contained in the associated risk library. For example, if the risk library contains pornographic samples, you can select pornographic content in the policy scenario.
Moderation Scenarios: The categories displayed are those configured in your moderation policy. You can select the scenario categories you wish to moderate.
Daily Moderation Limit: You can set the maximum number of audio files moderated per day. There is no limit by default.
Moderate Text:
Moderation Suffix: Supports moderating text files with extensions such as txt and html, or with no extension.
Select a Moderation Policy: Select a moderation policy you have configured (if you have not configured one, you can use the system default policy). Different moderation policies correspond to different policy categories, and you can create custom policies for personalized scenarios. Scenarios such as pornography, illegal activities, and advertising are supported, and you can select one or more detection scenarios. For how to configure a moderation policy, see Set Moderation Policy.
Risk Library Associated: The policy scenario will display sample tags contained in the associated risk library. For example, if the risk library contains pornographic samples, you can select pornographic content in the policy scenario.
Moderation Scenarios: The categories displayed are those configured in your moderation policy. You can select the scenario categories you wish to moderate.
Daily Moderation Limit: You can set the maximum number of text files moderated per day. There is no limit by default.
Moderate Document:
Moderation Suffix: Supported document formats include presentation files, text files, spreadsheet files, PDF, and so on. Multiple formats can be selected.
Select a Moderation Policy: Select a moderation policy you have configured (if you have not configured one, you can use the system default policy). Different moderation policies correspond to different policy categories, and you can create custom policies for personalized scenarios. Scenarios such as pornography, illegal activities, and advertising are supported, and you can select one or more detection scenarios. For how to configure a moderation policy, see Set Moderation Policy.
Risk Library Associated: The policy scenario will display sample tags contained in the associated risk library. For example, if the risk library contains pornographic samples, you can select pornographic content in the policy scenario.
Moderation Scenarios: Displays the scenarios configured in the moderation policy you selected. You can select the scenarios you wish to moderate.
Daily Moderation Limit: You can set the maximum number of documents moderated per day. There is no limit by default.
9. On the Block Policy page, configure the freeze policy and click Next.
Block Configuration: You can choose to enable this service. Once enabled, Cloud Infinite is authorized to automatically freeze files of the corresponding types based on machine moderation results, blocking public read access to detected non-compliant content. After enabling the service, select the file types to freeze and the score range for freezing.



Block Type: For different business scenarios, you can select the file types to freeze and the score range for freezing (an integer between 60 and 100; a higher score indicates more sensitive content).
Note:
Machine moderation uses a scoring mechanism: each image is assigned a score from 0 to 100.
The determined part refers to images definitively identified as sensitive or normal, with scores in the [0,60] and (90,100] ranges. In these intervals, the confidence level is considered clear enough that no manual intervention is required.
The uncertain part refers to suspected sensitive images with scores in the (60,90] range, where the system cannot clearly determine whether the content is sensitive. It is recommended that you set a score threshold in this range based on how strict your business needs the moderation to be.
CDN Cache Purge After Blocking: When this feature is enabled, freezing a file in the COS origin also purges the corresponding CDN cached data. Cloud Infinite calls the CDN URL refresh interface to perform the purge. Note that URL refreshes are subject to a daily quota; for specific restrictions, see the CDN Refresh URL API.
Block mode: Currently supports the following two freeze methods.
Change the file ACL to private read: The file is frozen by changing its access permission to private read (private). With this method, accessing the file returns a 403 status code, indicating no permission to access the file. For details on file permissions, see File ACL Overview.
Transfer the file to the backup directory: The file is frozen by moving it to a backup directory. With this method, accessing the file returns a 404 status code, indicating the file does not exist. The backup directory is generated automatically by the backend at audit_freeze_backup/increment_audit in the current Bucket.
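The scoring bands described in the note above can be made concrete with a small sketch. The function name, the default threshold, and the returned labels are illustrative assumptions; the console applies its own logic using the threshold you configure.

```python
# Sketch of the scoring bands: [0,60] and (90,100] are "determined",
# (60,90] is "uncertain" and compared against the user-chosen freeze
# threshold (an integer between 60 and 100). Names are assumptions.
def classify(score: int, freeze_threshold: int = 90) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score <= 60:
        return "normal"        # determined: normal content
    if score > 90:
        return "sensitive"     # determined: sensitive content
    # uncertain band (60, 90]: apply the configured threshold
    return "sensitive" if score >= freeze_threshold else "review"
```

Lowering the threshold freezes more of the uncertain band: a score of 80 is left for review at the default threshold but frozen at a threshold of 75.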
10. On the Moderation Result page, set the moderation result callback, then click Next.
After Callback Settings are enabled, moderation results are sent to your specified callback address. Select the Callback Scenario and Callback Content, and set the Callback URL.


Callback Scenario: Based on your configured moderation policy, options include pornography, illegal activities, advertising, and abusive language.
Callback Content: Options include callbacks only for non-compliant files, only for frozen files, or for all files. Files for which moderation failed can be re-moderated.
Callback URL: The callback URL must respond with a 200 status code before it can be used.
Callback URL Protocol: You can enforce HTTP or HTTPS.
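A minimal callback receiver that satisfies the "must respond with 200" requirement above can be sketched with Python's standard library. The port, path handling, and payload parsing here are assumptions; parse the body according to the callback content you actually selected.

```python
# Minimal sketch of a moderation-callback receiver that always
# acknowledges with HTTP 200. Port and payload handling are assumed.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            result = json.loads(body or b"{}")
        except json.JSONDecodeError:
            result = {}
        # Replace this print with your own result handling/storage.
        print("moderation callback:", result)
        self.send_response(200)   # the 200 acknowledgement required above
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the example quiet
        pass

def serve(port: int = 8080) -> None:
    """Run the callback server (blocking)."""
    HTTPServer(("0.0.0.0", port), CallbackHandler).serve_forever()
```

Expose the server at a publicly reachable HTTPS URL and enter that URL as the Callback URL.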
11. Confirm that the overall configuration is correct, then click Create to complete the task creation.

View Task Results

On the Historical Data Moderation page, you can perform different operations depending on the task status.



When the task status is Under Review, you can view Task Configuration or Terminate Task.
When the task status is Completed, you can view Moderation Details, view Result Statistics, or view Task Configuration.
View Moderation Details: Only moderation details from the last month are available. Clicking redirects you to the moderation page, where you can export moderation results, perform manual review, and so on. For specific operations, see Moderation Details.
View Result Statistics: This page displays the statistics of the moderation task. If you have questions about the results, go to the moderation details page in the console to view the specific content.
View Task Configuration: This page displays the configuration of the moderation task, including the scan configuration, moderation policy, freeze policy, and moderation results.