Feature Overview
The TAS feature helps users effectively detect prohibited text content, identifying undesirable material such as pornographic, illegal/non-compliant, advertising, and abusive content to comprehensively safeguard business security.
Pornography Detection
Accurately detects pornographic text and identifies common variants such as Pinyin substitutions, character splitting, and homophones. The detection model is trained on massive volumes of non-compliant text data and is highly resistant to obfuscation.
Ad Detection
Detects spam advertisements, including malicious screen-flooding ads, WeChat business ads, flyposting ads, and pornographic traffic-redirection ads. Supports detection of variants that use symbols, icons, Pinyin, emojis, and similar obfuscation techniques.
Note:
TAS is a paid service. For pricing details, see Content Moderation Fees. Upon first use of this service in a Cloud Infinite account, a free tier resource pack of 100,000 requests, valid for 2 months, will be issued. Usage exceeding this quota, or after the resource pack expires, will be billed at standard rates.
Before using TAS, confirm the relevant limitations and supported regions. For details, see Usage Limitations.
After the TAS feature is enabled, any new text generated in your COS Bucket is automatically detected, and detected non-compliant content can be automatically frozen (public read access prohibited).
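The automatic freezing behavior described above amounts to a simple post-moderation decision. The sketch below illustrates it in Python; the payload shape and the `suggestion` values (0 = pass, 1 = needs review, 2 = block) are illustrative assumptions, not the service's exact schema:

```python
# Illustrative sketch: decide whether to freeze (prohibit public read on)
# an object based on a text-moderation result. The payload shape below is
# a hypothetical simplification, not the exact TAS result schema.

SUGGESTION_PASS = 0     # content is compliant
SUGGESTION_REVIEW = 1   # suspected violation, route to manual review
SUGGESTION_BLOCK = 2    # confirmed non-compliant

def should_freeze(result: dict) -> bool:
    """Freeze the object only when moderation confirms a violation."""
    return result.get("suggestion") == SUGGESTION_BLOCK

def handle_result(result: dict) -> str:
    if should_freeze(result):
        return "freeze"          # prohibit public read access
    if result.get("suggestion") == SUGGESTION_REVIEW:
        return "manual-review"   # queue for a human moderator
    return "pass"
```

In practice the freeze action itself is performed by the service on the Bucket object; this sketch only shows the decision logic that such a pipeline applies to each result.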
Feature Experience
Applicable Scenarios
E-commerce Platform
E-commerce platforms host vast volumes of user reviews with diverse content, which often includes undesirable material such as abusive language and advertisements. TAS comprehensively covers various text content types, detects user-generated content, blocks non-compliant comments, and reduces manual moderation costs.
Social Platform
TAS can be widely applied to BBS, blogs, and various websites with user-generated content (UGC), including scenarios such as posts, replies, and private messages, to detect inappropriate content in text. Through console configuration, it automatically triggers incremental content detection with millisecond-level response times, effectively ensuring a positive browsing experience for users.
Video Platform
TAS can accurately detect offensive, unsafe, or inappropriate content in live video streaming scenarios with massive user interactions such as bullet comments and reviews. Unlike traditional manual moderation, which relies on human review and has long turnaround times, TAS enables rapid identification and automatic blocking, effectively improving moderation efficiency and ensuring platform security.
Prerequisites
You have enabled the COS service, created a Bucket, and uploaded files to the Bucket. For specific operations, see Bucket files.
You have activated the Cloud Infinite service and bound the Bucket. For specific operations, see Bucket Binding.
Usage
Automated Moderation on Upload
You can activate the service through the Cloud Infinite console. After activation, newly added .txt text files in the Bucket will undergo Content Moderation during upload. For usage details, see the Text Moderation console documentation.
Historical Data Scan Auditing
Use API Interfaces
Using SDKs
You can use our SDKs in various languages to perform Content Moderation on text. For details, see the following SDK documentation:
Android SDK
iOS SDK
C++ SDK
Go SDK
.NET SDK
Java SDK
JavaScript SDK
Python SDK
Mini Program SDK
Note:
The processing capabilities provided by Cloud Infinite are fully integrated with the COS SDK. You can directly utilize the COS SDK for processing operations.
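As a concrete illustration of submitting text through an SDK, the sketch below assembles a text-moderation request body in Python. Inline text is Base64-encoded before submission; the helper name (`build_text_audit_request`) and the dict layout (`Input.Content`, `Conf.Callback`) are illustrative assumptions rather than the SDK's exact parameter schema — see the Python SDK documentation for the real interface:

```python
import base64

def build_text_audit_request(text: str, callback_url: str = "") -> dict:
    """Assemble an illustrative text-moderation request body.

    Inline text is Base64-encoded before submission; the dict layout here
    is a simplified assumption, not the SDK's exact parameter schema.
    """
    body = {
        "Input": {
            "Content": base64.b64encode(text.encode("utf-8")).decode("ascii")
        }
    }
    if callback_url:
        # Optionally ask the service to POST results to a callback address.
        body["Conf"] = {"Callback": callback_url}
    return body

# Build a request for a short comment; the real submission call is made
# through the SDK client and is omitted here.
req = build_text_audit_request("hello world")
```

The Base64 step matters because moderated text may contain characters that are unsafe to embed directly in a request.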
View Moderation Results
Callback settings: You can set the callback address, callback moderation type, callback threshold, etc., to filter callback content. Moderation results are automatically sent to your callback address for subsequent processing. For details on the callback content, see the Text Moderation console documentation.
Visual processing: After enabling the TAS feature, you can view moderation results filtered by specified criteria on the moderation details page in the console and process them manually. For usage details, see the Moderation Details console documentation.
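A callback receiver typically parses the posted payload and filters results before acting on them, for example by the threshold mentioned above. A minimal Python sketch follows; the JSON field names (`results`, `object`, `label`, `score`) are hypothetical, not the exact TAS callback schema:

```python
import json

def filter_callbacks(payload: str, threshold: int = 80) -> list:
    """Return callback items whose confidence score meets the threshold.

    `payload` is the raw JSON body POSTed to the callback address; the
    field names used here are illustrative assumptions.
    """
    items = json.loads(payload).get("results", [])
    return [it for it in items if it.get("score", 0) >= threshold]

# Example payload with one flagged ad and one normal comment.
sample = json.dumps({
    "results": [
        {"object": "comment-1.txt", "label": "Ads", "score": 95},
        {"object": "comment-2.txt", "label": "Normal", "score": 10},
    ]
})
flagged = filter_callbacks(sample)
```

Filtering at the receiver complements the console-side threshold setting: even with server-side filtering configured, defensive checks in your own handler keep downstream actions predictable.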