
Image Moderation
Last updated: 2023-04-17 15:01:47
VOD's image moderation feature detects non-compliant information in images with the help of AI. The moderation results generated include a score and a suggestion. You can decide whether to publish an image based on the results. This helps you avoid potential legal risks and damage to your brand's reputation.
VOD can moderate both images and the text they contain (recognized via OCR). The supported moderation labels cover pornographic, terrorist, politically sensitive, illegal, abusive, and advertising content.
Content Type | Moderation Labels
Images | Pornographic (Porn), Terrorist (Terror), Politically sensitive (Polity), Ads (Ad), Illegal (Illegal)
Text in images (OCR) | Pornographic (Porn), Terrorist (Terror), Politically sensitive (Polity), Ads (Ad), Illegal (Illegal), Abuse (Abuse)
The moderation results include the following fields:

Field | Type | Description
Confidence | Float | The moderation score (0-100). The higher the score, the more likely the content is non-compliant.
Suggestion | String | The suggested action. Valid values: pass, review, block.

pass: The probability of the content being non-compliant is low. We recommend you allow the content to pass.
review: The content may be non-compliant, but this cannot be determined confidently. We recommend you verify the content manually.
block: The probability of the content being non-compliant is high. We recommend you block the content.
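To make these fields concrete, below is a minimal sketch of how a caller might act on a moderation result. Only the Confidence and Suggestion fields come from the table above; the decide function name and the publish/review/reject mapping are illustrative, not part of the VOD API.

```python
def decide(confidence: float, suggestion: str) -> str:
    """Map a moderation result to an action.

    `confidence` (0-100) and `suggestion` (pass/review/block) are the
    fields documented above; the policy below is an illustrative example.
    """
    if suggestion == "block":
        return f"rejected (score {confidence:.1f})"
    if suggestion == "review":
        return f"queued for manual review (score {confidence:.1f})"
    # suggestion == "pass": low probability of non-compliance
    return f"published (score {confidence:.1f})"
```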

Initiating a Moderation Task

You can start an image moderation task in the console or by calling the ReviewImage server API.
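For the API path, here is a minimal sketch using the TencentCloud Python SDK (tencentcloud-sdk-python). The credentials and FileId are placeholders, and the template ID 10 is an assumption; check the ReviewImage API reference for the parameters your account supports.

```python
from tencentcloud.common import credential
from tencentcloud.vod.v20180717 import vod_client, models

# Placeholder credentials; use your own SecretId/SecretKey.
cred = credential.Credential("SECRET_ID", "SECRET_KEY")
client = vod_client.VodClient(cred, "")  # region can be left empty for VOD

req = models.ReviewImageRequest()
req.FileId = "5285890812345678901"  # placeholder: VOD file ID of the image
req.Definition = 10                 # assumed image moderation template ID

# ReviewImage is synchronous: the moderation result is in the response.
resp = client.ReviewImage(req)
print(resp.to_json_string())
```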

Obtaining the Result

Moderation results are returned synchronously, regardless of how you start a task. If you start a task in the console, view the result in the console. If you start a task with the ReviewImage API, the result is returned in the API response. For the structure of the returned data, see ReviewImage - 3. Output Parameters.
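Continuing the SDK sketch above, the snippet below pulls the documented Confidence and Suggestion fields out of the synchronous response and feeds them to the decide helper defined earlier. The MediaReviewResult key is hypothetical; the authoritative field layout is specified in ReviewImage - 3. Output Parameters.

```python
import json

# Hypothetical response layout: consult "ReviewImage - 3. Output Parameters"
# for the real field names before relying on this parsing.
data = json.loads(resp.to_json_string())
result = data.get("MediaReviewResult", {})  # hypothetical top-level key

action = decide(result.get("Confidence", 0.0),
                result.get("Suggestion", "review"))
print(action)
```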