Tencent Effect SDK

iOS

Last updated: 2023-02-27 12:15:21

Overview


This capability processes audio data and outputs blendshape data that conforms to Apple's ARKit standard. For details, see ARFaceAnchor. You can pass the data to Unity to drive your model, or use it to implement other features.

Integration

Method 1: Integrate the Tencent Effect SDK

1. The capability of converting audio to expressions is built into the Tencent Effect SDK, so you can use it by integrating the Tencent Effect SDK.
2. Download the complete edition of the Tencent Effect SDK.
3. Follow the directions in Integrating Tencent Effect SDK to integrate the SDK.
4. Import Audio2Exp.framework from the SDK into your project. Select your target, and under the General tab, find Frameworks, Libraries, and Embedded Content and set Audio2Exp.framework to Embed & Sign.

Method 2: Integrate the Audio-to-Expression SDK

If you only need the capability of converting audio to expressions, you can integrate the standalone Audio-to-Expression SDK (Audio2Exp.framework is about 7 MB). Import the two dynamic frameworks Audio2Exp.framework and YTCommonXMagic.framework into your project. Select your target, and under the General tab, find Frameworks, Libraries, and Embedded Content and set both Audio2Exp.framework and YTCommonXMagic.framework to Embed & Sign.

Directions

1. Set the license. For detailed directions, see Integrating Tencent Effect SDK - Step 1. Authenticate.
2. Configure the model file. Copy the model file audio2exp.bundle to your project directory. When calling initWithModelPath: of Audio2ExpApi, pass in the path of the model file.

APIs

API
Description
+ (int)initWithModelPath:(NSString *)modelPath;
Initializes the SDK. Pass in the path of the model file. A return value of 0 indicates successful initialization.
+ (NSArray *)parseAudio:(NSArray *)inputData;
The input is audio data, which must be mono with a sample rate of 16 kHz, passed as an array of 267 elements (267 sampling points). The output is a float array with 52 elements, which correspond to 52 blendshapes. The value of each element ranges from 0 to 1, and their order is specified by Apple: {"eyeBlinkLeft","eyeLookDownLeft","eyeLookInLeft","eyeLookOutLeft","eyeLookUpLeft","eyeSquintLeft","eyeWideLeft","eyeBlinkRight","eyeLookDownRight","eyeLookInRight","eyeLookOutRight","eyeLookUpRight","eyeSquintRight","eyeWideRight","jawForward","jawLeft","jawRight","jawOpen","mouthClose","mouthFunnel","mouthPucker","mouthRight","mouthLeft","mouthSmileLeft","mouthSmileRight","mouthFrownRight","mouthFrownLeft","mouthDimpleLeft","mouthDimpleRight","mouthStretchLeft","mouthStretchRight","mouthRollLower","mouthRollUpper","mouthShrugLower","mouthShrugUpper","mouthPressLeft","mouthPressRight","mouthLowerDownLeft","mouthLowerDownRight","mouthUpperUpLeft","mouthUpperUpRight","browDownLeft","browDownRight","browInnerUp","browOuterUpLeft","browOuterUpRight","cheekPuff","cheekSquintLeft","cheekSquintRight","noseSneerLeft","noseSneerRight","tongueOut"}
+ (int)releaseSdk;
Releases resources. Call this API when you no longer need the capability.
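Because parseAudio consumes fixed frames of 267 sampling points (mono, 16 kHz), captured audio must be split into frames of exactly that size before each call. A minimal C sketch of that framing step, under the frame size stated above (the function name and callback are illustrative, not part of the SDK):

```c
#include <stddef.h>

#define A2E_FRAME_SAMPLES 267  /* sampling points per parseAudio call */

/* Splits a mono 16 kHz PCM buffer into complete 267-sample frames and hands
 * each frame to a caller-supplied callback (on iOS, the callback would wrap
 * the parseAudio call). Returns the number of complete frames processed;
 * trailing samples that do not fill a frame are left for the caller to
 * buffer until more audio arrives. */
size_t a2e_feed_frames(const float *samples, size_t count,
                       void (*on_frame)(const float *frame, void *ctx),
                       void *ctx)
{
    size_t frames = count / A2E_FRAME_SAMPLES;
    for (size_t i = 0; i < frames; i++) {
        on_frame(samples + i * A2E_FRAME_SAMPLES, ctx);
    }
    return frames;
}
```

In a streaming setup, the leftover tail (`count % A2E_FRAME_SAMPLES` samples) should be carried over and prepended to the next capture buffer so no audio is dropped.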

Integration Code Sample

// Initialize the Audio-to-Expression SDK
NSString *path = [[NSBundle mainBundle] pathForResource:@"audio2exp" ofType:@"bundle"];
int ret = [Audio2ExpApi initWithModelPath:path]; // 0 indicates success
// Convert audio to blendshape data (floatArr holds 267 mono 16 kHz sampling points)
NSArray *emotionArray = [Audio2ExpApi parseAudio:floatArr];
// Release the SDK when the capability is no longer needed
[Audio2ExpApi releaseSdk];

// Use with the Tencent Effect SDK
// Initialize the SDK
self.beautyKit = [[XMagic alloc] initWithRenderSize:previewSize assetsDict:assetsDict];
// Load the avatar materials
[self.beautyKit loadAvatar:bundlePath exportedAvatar:nil completion:nil];
// Pass the blendshape data to the SDK, and the effects will be applied.
[self.beautyKit updateAvatarByExpression:emotionArray];
Note:
For the full code sample, see Demos.
For audio recording, see TXCAudioRecorder.
For more information on using the APIs, see VoiceViewController and the related classes.
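The 52 output values follow the fixed ARKit order listed in the API table above, so a small name table makes individual coefficients addressable by name. A C sketch of such a lookup (the name list is copied from the documented order; the helper function is illustrative, not part of the SDK):

```c
#include <string.h>

/* Blendshape names in the exact order of the parseAudio output array. */
static const char *kBlendshapeNames[52] = {
    "eyeBlinkLeft","eyeLookDownLeft","eyeLookInLeft","eyeLookOutLeft",
    "eyeLookUpLeft","eyeSquintLeft","eyeWideLeft","eyeBlinkRight",
    "eyeLookDownRight","eyeLookInRight","eyeLookOutRight","eyeLookUpRight",
    "eyeSquintRight","eyeWideRight","jawForward","jawLeft","jawRight",
    "jawOpen","mouthClose","mouthFunnel","mouthPucker","mouthRight",
    "mouthLeft","mouthSmileLeft","mouthSmileRight","mouthFrownRight",
    "mouthFrownLeft","mouthDimpleLeft","mouthDimpleRight","mouthStretchLeft",
    "mouthStretchRight","mouthRollLower","mouthRollUpper","mouthShrugLower",
    "mouthShrugUpper","mouthPressLeft","mouthPressRight","mouthLowerDownLeft",
    "mouthLowerDownRight","mouthUpperUpLeft","mouthUpperUpRight","browDownLeft",
    "browDownRight","browInnerUp","browOuterUpLeft","browOuterUpRight",
    "cheekPuff","cheekSquintLeft","cheekSquintRight","noseSneerLeft",
    "noseSneerRight","tongueOut"
};

/* Returns the index of a blendshape name in the output array, or -1 if the
 * name is not one of the 52 ARKit blendshapes. */
int blendshape_index(const char *name)
{
    for (int i = 0; i < 52; i++) {
        if (strcmp(kBlendshapeNames[i], name) == 0) return i;
    }
    return -1;
}
```

For example, `blendshape_index("jawOpen")` selects the coefficient that drives the mouth-opening of the avatar; reading it from the array returned by parseAudio gives a value between 0 and 1.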
