Tencent Cloud

Cloud Object Storage


Synchronized Moderation of Text Content

Last updated: 2026-01-19 18:04:46

Feature Overview

This API moderates text content directly and returns the result in a synchronous request.
This API has the following characteristics:
Note:
Plaintext must be Base64-encoded (UTF-8 encoding only) before it is submitted for moderation.
Multiple non-compliant scenes can be recognized, including pornographic, illegal, and advertising content.
Moderation policies can be configured for diverse business scenarios; see Setting Moderation Policy.

Authorization Guidelines

In the authorization policy, set action to ci:CreateAuditingTextJob. View all actions.

Service Activation

Before using this feature, you need to activate Cloud Infinite (CI) and bind a bucket. For details, see Bind Bucket.

Prerequisites

Before using this API, check all relevant limitations. For details, see Specifications and Limits.

Pricing

Each moderation scene is billed separately. For example, if you select moderation for both pornographic and advertising scenes, moderating one piece of text will result in charges for two moderation actions.

Use Limits

Length limit: Plaintext must be Base64-encoded before moderation. The original text, before encoding, must not exceed 10,000 UTF-8 characters.
Language support: Detection currently covers Chinese, mixed Chinese and English, and Arabic numerals.
Default API concurrency: 100.
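The encoding step and length limit above can be sketched as follows. This is a hypothetical helper, not part of the API; the sample string used in the usage note is the one whose Base64 form appears in the request example later in this document.

```python
import base64

MAX_CHARS = 10_000  # length limit, counted in UTF-8 characters before encoding

def encode_for_moderation(text: str) -> str:
    """Base64-encode UTF-8 plaintext for the Content node (hypothetical helper)."""
    if len(text) > MAX_CHARS:
        raise ValueError("text exceeds the 10,000-character pre-encoding limit")
    return base64.b64encode(text.encode("utf-8")).decode("ascii")
```

For example, encode_for_moderation("狙击手") returns "54uZ5Ye75omL", the Content value used in the request example below.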


SDK Recommendation

The CI SDK offers complete demos, automated integration, signature calculation, and other capabilities, making API calls more convenient. Click here to view the SDK documentation.

Request

Request Example

POST /text/auditing HTTP/1.1
Host: <BucketName-APPID>.ci.<Region>.myqcloud.com
Date: <GMT Date>
Authorization: <Auth String>
Content-Length: <length>
Content-Type: application/xml

<body>
Note:
Authorization: Auth String (for details, see Request Signature).
When the API is called by a sub-account, the required permissions must be granted first. For details, see Authorization Granularity Details.
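The request above can be assembled programmatically. The following Python sketch builds the URL, headers, and body; it assumes the host follows the BucketName-APPID.ci.Region.myqcloud.com format shown above, the helper name is hypothetical, and the Authorization placeholder must be replaced with a real signature computed per Request Signature.

```python
import base64

def build_text_auditing_request(bucket, appid, region, text, biz_type=""):
    """Return (url, headers, body) for a POST /text/auditing call (sketch)."""
    content = base64.b64encode(text.encode("utf-8")).decode("ascii")
    body = (
        "<Request><Input><Content>{}</Content></Input>"
        "<Conf><BizType>{}</BizType></Conf></Request>"
    ).format(content, biz_type)
    url = "https://{}-{}.ci.{}.myqcloud.com/text/auditing".format(bucket, appid, region)
    headers = {
        "Authorization": "<Auth String>",  # placeholder; compute per Request Signature
        "Content-Type": "application/xml",
        "Content-Length": str(len(body.encode("utf-8"))),
    }
    return url, headers, body
```

The returned tuple can then be sent with any HTTP client once a valid signature is filled in.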

Request Header

This API only uses common request headers. For details, see Common Request Headers.

Request Body

This request requires the following request body:
<Request>
  <Input>
    <Content></Content>
    <DataId></DataId>
  </Input>
  <Conf>
    <Callback></Callback>
    <CallbackVersion></CallbackVersion>
    <CallbackType></CallbackType>
    <BizType></BizType>
    <Freeze>
      <PornScore></PornScore>
    </Freeze>
  </Conf>
</Request>
The following details the data description:
Node Name (Keyword)
Parent Node
Description
Type
Required or Not
Request
No
The specific configuration item for text moderation.
Container
Yes
The data description for Request of the Container type is as follows:
Node Name (Keyword)
Parent Node
Description
Type
Required or Not
Input
Request
Content requiring moderation.
Container
Yes
Conf
Request
Configuration of moderation rules.
Container
Yes
The data description for Input of the Container type is as follows:
Node Name (Keyword)
Parent Node
Description
Type
Required or Not
Content
Request.Input
When the input is plaintext, it must first be Base64-encoded. The text length before encoding cannot exceed 10,000 UTF-8 characters; if this limit is exceeded, the API returns an error. Note: Currently, detection and moderation are supported for Chinese, mixed Chinese and English, and Arabic numerals. To moderate other languages, please contact us.
String
No
DataId
Request.Input
This field in the moderation result will return the original content, with a length limit of 512 bytes. You can use this field to uniquely identify the data to be moderated.
String
No
UserInfo
Request.Input
User business field.
Container
No
UserInfo content of the Container node:
Node Name (Keyword)
Description
Type
Required or Not
TokenId
It usually indicates account information, limited to 128 bytes in length.
String
No
Nickname
It usually indicates nickname information, limited to 128 bytes in length.
String
No
DeviceId
It usually indicates device information, limited to 128 bytes in length.
String
No
AppId
It usually indicates a unique identifier for the App, limited to 128 bytes in length.
String
No
Room
It usually indicates room number information, limited to 128 bytes in length.
String
No
IP
It usually indicates IP address information, limited to 128 bytes in length.
String
No
Type
It usually indicates the business type, limited to 128 bytes in length.
String
No
ReceiveTokenId
It usually indicates the user account for receiving messages, limited to 128 bytes in length.
String
No
Gender
It usually indicates gender data, limited to 128 bytes in length.
String
No
Level
It usually indicates level information, limited to 128 bytes in length.
String
No
Role
It usually indicates role details, limited to 128 bytes in length.
String
No
The data description for Conf of the Container type is as follows:
Node Name (Keyword)
Parent Node
Description
Type
Required or Not
BizType
Request.Conf
A unique identifier for the moderation policy. You can configure the scenes to moderate (such as pornography, advertising, or illegal content) on the moderation policy page in the console; for guidelines, see Setting Moderation Policy. BizType can be obtained from the console. When BizType is specified, the request is moderated according to the scenes configured in that policy; if it is left blank, the default moderation policy is applied automatically.
String
No
Callback
Request.Conf
The moderation results can be sent to your callback address. Addresses starting with http:// or https:// are supported, for example: http://www.callback.com. When the input is Content (plaintext), this parameter does not take effect and results are returned directly.
String
No
CallbackVersion
Request.Conf
The structure of the callback content. Valid values: Simple (the callback content includes basic information), Detail (the callback content includes detailed information). The default value is Simple.
String
No
CallbackType
Request.Conf
The type of callback segment. Valid values: 1 (all text segments are recalled), 2 (non-compliant text segments are recalled). The default value is 1.
Integer
No
Freeze
Request.Conf
This field allows you to automatically block text files based on the scores given in the moderation results. It takes effect only when the moderated input is an object, not plaintext Content.
Container
No
The data description for Freeze of the Container type is as follows:
Node Name (Keyword)
Parent Node
Description
Type
Required or Not
PornScore
Request.Conf.Freeze
The value range is [0,100]. A block operation will be carried out automatically when the pornographic content moderation score equals or exceeds the given score. If it is left blank, an automatic block operation will not occur. The default value is null.
Integer
No
AdsScore
Request.Conf.Freeze
The value range is [0,100]. A block operation will be carried out automatically when the advertising moderation score equals or exceeds the specified score. If it is left blank, an automatic block operation will not occur. The default value is null.
Integer
No
IllegalScore
Request.Conf.Freeze
The value range is [0,100]. A block operation will be carried out automatically if the illegal content moderation result equals or exceeds this score. If it is left blank, an automatic block operation will not occur. The default value is null.
Integer
No
AbuseScore
Request.Conf.Freeze
The value range is [0,100]. A block operation will be carried out automatically if the verbal abuse moderation result equals or exceeds this score. If it is left blank, an automatic block operation will not occur. The default value is null.
Integer
No
For block parameters in other moderation scenarios, please contact our service team.
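For illustration, a Conf that automatically blocks text objects scoring at or above the given thresholds might look like the fragment below; the threshold values and the BizType placeholder are arbitrary examples, not recommended settings.

```xml
<Conf>
  <BizType>your-biztype</BizType>
  <Freeze>
    <PornScore>90</PornScore>
    <AdsScore>95</AdsScore>
  </Freeze>
</Conf>
```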

Response

Response Header

This API only returns common response headers. For more information, see Common Response Headers.

Response Body

The response body returns application/xml data. The following displays complete node data:
<Response>
  <JobsDetail>
    <DataId></DataId>
    <JobId></JobId>
    <State></State>
    <CreationTime></CreationTime>
    <Code>Success</Code>
    <Message>Success</Message>
    <SectionCount></SectionCount>
    <Result>1</Result>
    <ContextText></ContextText>
    <PornInfo>
      <HitFlag></HitFlag>
      <Count></Count>
    </PornInfo>
    <Section>
      <StartByte></StartByte>
      <PornInfo>
        <HitFlag></HitFlag>
        <Score></Score>
        <Keywords></Keywords>
      </PornInfo>
    </Section>
  </JobsDetail>
  <RequestId></RequestId>
</Response>
The specific data content is as follows:
Node Name (Keyword)
Parent Node
Description
Type
Response
No
The specific response content returned by text moderation.
Container
Response content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
JobsDetail
Response
Detailed information of text moderation.
Container
RequestId
Response
When a request is sent, the server automatically generates an ID unique to that request, which helps locate any problems encountered.
String
JobsDetail contents of the Container type:
Node Name (Keyword)
Parent Node
Description
Type
Code
Response.JobsDetail
Error code. It is returned only if State is Failed. For more information, see Error Codes.
String
DataId
Response.JobsDetail
Unique business identifier added in the request.
String
Message
Response.JobsDetail
Error description. It is returned only when State is Failed.
String
JobId
Response.JobsDetail
Task ID of this text moderation operation.
String
CreationTime
Response.JobsDetail
Creation time of the text moderation.
String
State
Response.JobsDetail
Status of the text moderation. Valid values include Success (moderation succeeded) and Failed (moderation failed).
String
Content
Response.JobsDetail
The submitted text moderation content, which is returned in Base64 encoding.
String
ContextText
Response.JobsDetail
When the text context correlation moderation capability is activated, this field returns, in their original forms, the current text under moderation together with its associated context text. Note: When you use context correlation, the UserInfo field must be included when the text moderation operation is initiated; the correlated context is specific to a particular user ID, and text uploaded by different user IDs will not be correlated. To enable context correlation, please contact our service team.
String
Label
Response.JobsDetail
This field returns the moderation results which correspond to the malicious tag with the highest priority and are recommended by the model. It is recommended that you handle different types of violations and suggested values based on the business requirements. Returned values include: Normal: normal, Porn: pornography, Ads: advertising, along with other types of unsafe or inappropriate content.
String
Result
Response.JobsDetail
This field indicates the moderation result of the current assessment. You can perform subsequent operations based on the results. Valid values: 0 (normal), 1 (sensitive and non-compliant files), and 2 (possibly sensitive, with manual moderation recommended).
Integer
SectionCount
Response.JobsDetail
The number of content segments for text moderation. The value is fixed at 1.
Integer
PornInfo
Response.JobsDetail
The moderation result of the pornographic content moderation scene.
Container
AdsInfo
Response.JobsDetail
The moderation result of the advertising content moderation scene.
Container
IllegalInfo
Response.JobsDetail
The moderation result of the illegal content moderation scene.
Container
AbuseInfo
Response.JobsDetail
The moderation result of the abusive content moderation scene.
Container
Section
Response.JobsDetail
The specific result information of text moderation.
Container Array
UserInfo
Response.JobsDetail
User business field.
Container
ListInfo
Response.JobsDetail
Account whitelist/blacklist results.
Container
PornInfo, AdsInfo, IllegalInfo, and AbuseInfo content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
HitFlag
Response.JobsDetail.*Info
It is used to return moderation results of the corresponding scene. Returned values: 0: normal, 1: confirmed as violation content of the current scene, 2: suspected as violation content of the current scene.
Integer
Count
Response.JobsDetail.*Info
The number of segments that match the moderation classification.
Integer
Section Content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
StartByte
Response.JobsDetail.Section
The starting position of the segment within the text, counted in UTF-8 characters from 0 (for example, 10 indicates the 11th character).
Integer
Label
Response.JobsDetail.Section
This field returns the moderation results which correspond to the malicious tag with the highest priority and are recommended by the model. It is recommended that you handle different types of violations and suggested values based on the business requirements. Returned values include: Normal: normal, Porn: pornography, Ads: advertising, along with other types of unsafe or inappropriate content.
String
Result
Response.JobsDetail.Section
This field indicates the moderation result of the current assessment. You can perform subsequent operations based on the results. Valid values: 0 (normal), 1 (sensitive and non-compliant files), and 2 (possibly sensitive, with manual moderation recommended).
Integer
PornInfo
Response.JobsDetail.Section
The moderation result of the pornographic content moderation scene.
Container
AdsInfo
Response.JobsDetail.Section
The moderation result of the advertising content moderation scene.
Container
IllegalInfo
Response.JobsDetail.Section
The moderation result of the illegal content moderation scene.
Container
AbuseInfo
Response.JobsDetail.Section
The moderation result of the abusive content moderation scene.
Container
PornInfo, AdsInfo, IllegalInfo, and AbuseInfo content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
HitFlag
Response.JobsDetail.Section.*Info
It is used to return moderation results of the corresponding scene. Returned values: 0: normal, 1: confirmed as violation content of the current scene, 2: suspected as violation content of the current scene.
Integer
Score
Response.JobsDetail.Section.*Info
The moderation result score within this segment. Higher scores indicate more sensitive content.
Integer
Keywords
Response.JobsDetail.Section.*Info
Keywords matched under the current moderation scene. Multiple keywords are separated by ",".
String
LibResults
Response.JobsDetail.Section.*Info
This field is used to return results identified through the risk library. Note: This field is not returned when no samples within the risk library have been matched.
Container Array
SubLabel
Response.JobsDetail.Section.*Info
This field indicates the specific sub-tags matched in the moderation. For example: the SexBehavior sub-tag under Porn. Note: This field may return null, indicating that no specific sub-tags are matched.
String
LibResults content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
LibType
Response.JobsDetail.Section.*Info.LibResults
Type of the matched risk library. Valid values include 1 (preset white library and black library) and 2 (custom risk library).
Integer
LibName
Response.JobsDetail.Section.*Info.LibResults
Name of the matched risk library.
String
Keywords
Response.JobsDetail.Section.*Info.LibResults
Keywords matched in the library. This parameter may return multiple values, indicating multiple keywords matched.
String Array
UserInfo content in the Container node:
Node Name (Keyword)
Description
Type
Required or Not
TokenId
It usually indicates account information, limited to 128 bytes in length.
String
No
Nickname
It usually indicates nickname information, limited to 128 bytes in length.
String
No
DeviceId
It usually indicates device information, limited to 128 bytes in length.
String
No
AppId
It usually indicates a unique identifier for the App, limited to 128 bytes in length.
String
No
Room
It usually indicates room number information, limited to 128 bytes in length.
String
No
IP
It usually indicates IP address information, limited to 128 bytes in length.
String
No
Type
It usually indicates the business type, limited to 128 bytes in length.
String
No
ReceiveTokenId
It usually indicates the user account for receiving messages, limited to 128 bytes in length.
String
No
Gender
It usually indicates gender data, limited to 128 bytes in length.
String
No
Level
It usually indicates level information, limited to 128 bytes in length.
String
No
Role
It usually indicates role details, limited to 128 bytes in length.
String
No
ListInfo content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
ListResults
Response.JobsDetail.ListInfo
Result of all match lists.
Container Array
ListResults content of the Container node:
Node Name (Keyword)
Parent Node
Description
Type
ListType
Response.JobsDetail.ListInfo.ListResults
Type of match list. Valid values include 0 (whitelist) and 1 (blacklist).
Integer
ListName
Response.JobsDetail.ListInfo.ListResults
Name of the match list.
String
Entity
Response.JobsDetail.ListInfo.ListResults
The matched entry on the list.
String

Error Code

There is no specific error information for this request operation. For common error information, see Error Codes.

Examples

Request

POST /text/auditing HTTP/1.1
Authorization: q-sign-algorithm=sha1&q-ak=AKIDZfbOAo7cllgPvF9cXFrJD0a1ICvR****&q-sign-time=1497530202;1497610202&q-key-time=1497530202;1497610202&q-header-list=&q-url-param-list=&q-signature=28e9a4986df11bed0255e97ff90500557e0e****
Host: examplebucket-1250000000.ci.ap-beijing.myqcloud.com
Content-Length: 166
Content-Type: application/xml
<Request>
  <Input>
    <Content>54uZ5Ye75omL</Content>
  </Input>
  <Conf>
    <BizType>b81d45f94b91a683255e9a9506f45a11</BizType>
  </Conf>
</Request>

Response

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 230
Connection: keep-alive
Date: Thu, 15 Jun 2017 12:37:29 GMT
Server: tencent-ci
x-ci-request-id: NTk0MjdmODlfMjQ4OGY3XzYzYzhf****
<Response>
  <JobsDetail>
    <JobId>vab1ca9fc8a3ed11ea834c525400863904</JobId>
    <Content>54uZ5Ye75omL</Content>
    <State>Success</State>
    <CreationTime>2019-07-07T12:12:12+0800</CreationTime>
    <SectionCount>1</SectionCount>
    <Label>Illegal</Label>
    <Result>2</Result>
    <PornInfo>
      <HitFlag>0</HitFlag>
      <Count>0</Count>
    </PornInfo>
    <Section>
      <StartByte>0</StartByte>
      <Label>Illegal</Label>
      <Result>2</Result>
      <PornInfo>
        <HitFlag>0</HitFlag>
        <Score>0</Score>
        <Keywords/>
      </PornInfo>
    </Section>
  </JobsDetail>
  <RequestId>xxxxxxxxxxxxxx</RequestId>
</Response>
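A response like the one above can be consumed as sketched below, using Python's standard xml.etree module. The mapping from Result to an action is an application-level choice, not part of the API, and the function name is hypothetical.

```python
import xml.etree.ElementTree as ET

# Result values per the response body documentation:
# 0 = normal, 1 = non-compliant, 2 = suspected (manual review recommended)
ACTIONS = {0: "pass", 1: "block", 2: "review"}

def summarize(response_xml: str):
    """Return (Label, suggested action) from a /text/auditing response."""
    root = ET.fromstring(response_xml)
    detail = root.find("JobsDetail")
    label = detail.findtext("Label")
    result = int(detail.findtext("Result"))
    return label, ACTIONS[result]
```

Applied to the example response above (Label is Illegal and Result is 2), this returns ("Illegal", "review").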

