Integration Process
Last updated: 2026-02-11 16:06:30

Integration Preparations

Sign up for a Tencent Cloud account. For more information, see Signing Up.
Complete enterprise identity verification. For more information, see Enterprise Identity Verification Guide.
Log in to the eKYC console and activate the service.
Contact us to obtain the latest SDK and license.

Overall Architecture Diagram

The following diagram shows the architecture of the selfie verification SDK integration.

[Architecture diagram]


eKYC SDK integration includes two parts:
Client-side integration: Integrate the eKYC SDK into the customer's terminal service app.
Server-side integration: Expose endpoints on your (merchant) server for your (merchant) application, so that the application can request the merchant server to call the eKYC SaaS API. The merchant server obtains the SdkToken, which is used throughout the selfie verification process, and later pulls the final verification result.

Overall Interaction Process

You only need to pass in the token and call the corresponding eKYC SDK liveness detection method to complete liveness detection and obtain the result.
1. TencentCloud API for obtaining the token: GetFaceIdTokenIntl
2. TencentCloud API for pulling the liveness detection result: GetFaceIdResultIntl
The following diagram shows the overall logic of interaction between the SDK, client, and server:



The recommended detailed interaction process is as follows:
1. The customer triggers the liveness verification scenario in the merchant application on the terminal.
2. The merchant application sends a request to the merchant server indicating that a service token is required to start one liveness verification.
3. The merchant server passes in the relevant parameters to call the TencentCloud API GetFaceIdTokenIntl.
4. After receiving the GetFaceIdTokenIntl request, the eKYC SaaS delivers the service token to the merchant server.
5. The merchant server delivers the obtained service token to the customer's merchant application.
6. The merchant application calls the eKYC SDK's startup API startHuiYanAuth to pass in the token and configuration information and starts liveness verification.
7. The eKYC SDK captures and uploads the required user data, including liveness data, to the eKYC SaaS.
8. The eKYC SaaS returns the verification result to the eKYC SDK after completing liveness verification (including the selfie verification).
9. The eKYC SDK triggers a callback to notify the merchant application that verification is complete and to report the verification status.
10. After receiving the callback, the merchant application sends a request to notify the merchant server to obtain the verification result for confirmation.
11. The merchant server actively calls the eKYC SaaS API GetFaceIdResultIntl to pass in the relevant parameters and service token and obtain the verification result.
12. After receiving the request for calling GetFaceIdResultIntl, the eKYC SaaS returns the verification result to the merchant server.
13. After receiving the verification result, the merchant server delivers the required information to the merchant application.
14. The merchant application displays the final result on the UI to notify the customer of the verification result.

Integration

Server Integration

1. Integration preparations

Before server integration, you need to activate the Tencent Cloud eKYC service and obtain the TencentCloud API access key (SecretId and SecretKey) by following the instructions in Getting API Key. In addition, follow the instructions in Connecting to TencentCloud API to import the SDK package for the programming language you are familiar with into your server modules, so that the TencentCloud API can be called successfully and API requests and responses can be processed properly.
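The Go sample in the next section assumes a package declaration and imports along the following lines. This is a minimal sketch: the import paths follow the layout of the international Tencent Cloud Go SDK and must be verified against the SDK package and API version you actually download (the faceid version directory shown here is an assumption).

package main

import (
	"encoding/json"
	"log"
	"net/http"

	// SDK import paths are assumptions based on the international Go SDK layout;
	// verify them against the SDK version you downloaded.
	cloud "github.com/tencentcloud/tencentcloud-sdk-go-intl-en/tencentcloud/common"
	"github.com/tencentcloud/tencentcloud-sdk-go-intl-en/tencentcloud/common/profile"
	faceid "github.com/tencentcloud/tencentcloud-sdk-go-intl-en/tencentcloud/faceid/v20200304"
)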

2. Integration process

To let your (merchant) client application interact with your (merchant) server, the merchant server needs to call the eKYC API GetFaceIdTokenIntl to obtain the SdkToken, which is used throughout the selfie verification process and is later passed to the API GetFaceIdResultIntl to obtain the liveness comparison result. The merchant server also needs to provide the corresponding endpoints for the merchant client to call. The following Golang sample code shows how to call the TencentCloud API on the server and obtain the correct response.
Note: This example only demonstrates the processing logic required for interaction between the merchant server and the TencentCloud API service. If necessary, implement your own business logic, for example:
After you obtain the SdkToken using the API GetFaceIdTokenIntl, you can return other responses required by the client application to the client along with the SdkToken.
After you obtain the selfie verification result using the API GetFaceIdResultIntl, you can save the returned best frame photo for later business logic (a sketch follows the sample code below).
var FaceIdClient *faceid.Client

func init() {
	// Instantiate a client configuration object. You can specify the timeout period and other configuration items.
	prof := profile.NewClientProfile()
	prof.HttpProfile.ReqTimeout = 60
	// TODO: replace the "SecretId" and "SecretKey" strings with your API SecretId and SecretKey.
	credential := cloud.NewCredential("SecretId", "SecretKey")
	var err error
	// Instantiate the faceid client object used for requests.
	FaceIdClient, err = faceid.NewClient(credential, "ap-singapore", prof)
	if nil != err {
		log.Fatal("FaceIdClient init error: ", err)
	}
}

// GetFaceIdToken gets the token.
func GetFaceIdToken(w http.ResponseWriter, r *http.Request) {
	log.Println("get face id token")
	// Step 1: parse parameters.
	_ = r.ParseForm()
	var SecureLevel = r.FormValue("SecureLevel")

	// Step 2: instantiate the request object and provide the necessary parameters.
	request := faceid.NewGetFaceIdTokenIntlRequest()
	request.SecureLevel = &SecureLevel
	// Step 3: call the TencentCloud API through FaceIdClient.
	response, err := FaceIdClient.GetFaceIdTokenIntl(request)

	// Step 4: process the TencentCloud API response and construct the return object.
	if nil != err {
		_, _ = w.Write([]byte("error"))
		return
	}
	SdkToken := response.Response.SdkToken
	apiResp := struct {
		SdkToken *string
	}{SdkToken: SdkToken}
	b, _ := json.Marshal(apiResp)

	// ... more code is omitted

	// Step 5: return the service response.
	_, _ = w.Write(b)
}

// GetFaceIdResult gets the result.
func GetFaceIdResult(w http.ResponseWriter, r *http.Request) {
	// Step 1: parse parameters.
	_ = r.ParseForm()
	SdkToken := r.FormValue("SdkToken")
	// Step 2: instantiate the request object and provide the necessary parameters.
	request := faceid.NewGetFaceIdResultIntlRequest()
	request.SdkToken = &SdkToken
	// Step 3: call the TencentCloud API through FaceIdClient.
	response, err := FaceIdClient.GetFaceIdResultIntl(request)

	// Step 4: process the TencentCloud API response and construct the return object.
	if nil != err {
		_, _ = w.Write([]byte("error"))
		return
	}
	result := response.Response.Result
	apiResp := struct {
		Result *string
	}{Result: result}
	b, _ := json.Marshal(apiResp)

	// ... more code is omitted

	// Step 5: return the service response.
	_, _ = w.Write(b)
}

func main() {
	// Expose the endpoints.
	http.HandleFunc("/api/v1/get-token", GetFaceIdToken)
	http.HandleFunc("/api/v1/get-result", GetFaceIdResult)
	// Listen on the port.
	err := http.ListenAndServe(":8080", nil)
	if nil != err {
		log.Fatal("ListenAndServe error: ", err)
	}
}
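As mentioned in the note above, you may want to persist the best frame photo after calling GetFaceIdResultIntl. The following is a minimal sketch, assuming the response type is named GetFaceIdResultIntlResponse and exposes a Base64-encoded BestFrame field as described in the GetFaceIdResultIntl API documentation (verify both there); it additionally requires the encoding/base64 and os packages.

// Hedged sketch: persist the best frame returned by GetFaceIdResultIntl.
// The BestFrame field name and its availability are assumptions to be verified
// against the GetFaceIdResultIntl API documentation.
func saveBestFrame(response *faceid.GetFaceIdResultIntlResponse) error {
	if response.Response.BestFrame == nil {
		// No best frame returned.
		return nil
	}
	// The best frame is delivered as a Base64-encoded image.
	img, err := base64.StdEncoding.DecodeString(*response.Response.BestFrame)
	if err != nil {
		return err
	}
	// Store the decoded image for your own business logic, e.g. on local disk.
	return os.WriteFile("best_frame.jpg", img, 0o644)
}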

3. API testing

After you complete the integration, you can test whether it is correct by using Postman or a curl command. Specifically, call the endpoint (http://ip:port/api/v1/get-token) to check whether an SdkToken is returned, and call the endpoint (http://ip:port/api/v1/get-result) to check whether the value of the Result field is 0. These results indicate whether the server integration is successful. For details on the responses, see API introduction.
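For example, assuming the sample server above is running locally on port 8080, a quick check with curl might look like the following (the SecureLevel value is only an example; see the API documentation for valid values, and replace the SdkToken placeholder with the token returned by the first call):

# Request a token from the sample endpoint.
curl -d "SecureLevel=4" http://127.0.0.1:8080/api/v1/get-token

# Query the verification result, passing the SdkToken returned above.
curl -d "SdkToken=<SdkToken-from-previous-call>" http://127.0.0.1:8080/api/v1/get-result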

Integration with Android

1. Environment Dependency

The current SDK for Android is compatible with API 19 (Android 4.4) and later versions.
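Accordingly, the module-level build.gradle should declare a minimum SDK version no lower than API 19. A minimal sketch:

android {
    defaultConfig {
        // The eKYC SDK for Android supports API 19 (Android 4.4) and later.
        minSdkVersion 19
    }
}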

2. SDK integration steps

1. Add the following files (the specific version numbers should be confirmed upon downloading from the official website) to the libs directory of your project.
huiyansdk_android_overseas_1.0.9.6_release.aar
tencent-ai-sdk-youtu-base-1.0.1.39-release.aar
tencent-ai-sdk-common-1.1.36-release.aar
tencent-ai-sdk-aicamera-1.0.22-release.aar
tencent-ai-sdk-network-1.0.2.3.6-release.aar
├── codedemo
│ ├── build.gradle
│ ├── libs
│ │ ├── huiyansdk_android_overseas_1.0.9.14_release.aar
│ │ ├── tencent-ai-sdk-aicamera-1.0.25-release.aar
│ │ ├── tencent-ai-sdk-common-1.1.43-release.aar
│ │ ├── tencent-ai-sdk-network-1.0.2.3.6-release.aar
│ │ └── tencent-ai-sdk-youtu-base-1.0.1.44-release.aar
│ ├── proguard-rules.pro
│ └── src
│ └── main
2. In the build.gradle file of your project (the module-level build.gradle file under the app folder), make the following configurations:
// Set up filtering based on the NDK SO architecture (taking armeabi-v7a as an example; if the device also supports arm64-v8a, that option can be added as well).

defaultConfig {
    ndk {
        abiFilters 'armeabi-v7a'
    }
}
dependencies {
    // Introduce the SDK.
    implementation files("libs/huiyansdk_android_overseas_1.0.9.5_release.aar")
    // Common algorithm SDK.
    implementation files("libs/tencent-ai-sdk-youtu-base-1.0.1.32-release.aar")
    // Common capability component library.
    implementation files("libs/tencent-ai-sdk-common-1.1.27-release.aar")
    implementation files("libs/tencent-ai-sdk-aicamera-1.0.18-release.aar")
    implementation files("libs/tencent-ai-sdk-network-1.0.2.3.6-release.aar")

    // Third-party libraries that the SDK relies on.
    // gson
    implementation 'com.google.code.gson:gson:2.8.9'
}
3. It is also necessary to include the required permission declarations in the “AndroidManifest.xml” file.
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature
    android:name="android.hardware.camera"
    android:required="true" />

<!-- Network permission required by the SDK -->
<uses-permission android:name="android.permission.INTERNET" />

<!-- Dependency required for device risk control -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

<!-- Permissions required for the SDK (optional) -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
To be compatible with Android 6.0 and later versions, in addition to declaring these permissions in the “AndroidManifest.xml” file, you must also request the required permissions dynamically in code, as shown in the sketch below.
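A minimal sketch of such a runtime request for the camera permission, assuming an Activity and the AndroidX core library (ContextCompat/ActivityCompat); the request code value is arbitrary:

private static final int REQUEST_CAMERA_PERMISSION = 1001;

private void checkCameraPermission() {
    // On Android 6.0 (API 23) and later, the camera permission must be requested at runtime.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA},
                REQUEST_CAMERA_PERMISSION);
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_CAMERA_PERMISSION
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        // Permission granted: it is now safe to start the verification flow.
    }
}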

3. API initialization

This method should be called during the initialization of your app. It is recommended to call it in your Application class, as it performs the various initialization tasks required by the SDK.
@Override
public void onCreate() {
    super.onCreate();
    // The SDK needs to be initialized during the application startup process.
    HuiYanOsApi.init(this);
}

4. Main processes of the SDK

The integrator only needs to provide the Token and initiate the corresponding liveness detection method to complete the liveness detection process and receive results.
For tokens, refer to the cloud API document: GetFaceIdTokenIntl
For the results obtained after performing the liveness detection, refer to the cloud API document: GetFaceIdResultIntl
The following diagram illustrates the overall interaction logic between the SDK, the client, and the server in a streamlined integration approach.


5. Start the streamlined process of liveness detection for identity verification

// Relevant parameters of HuiYanOs.
HuiYanOsConfig huiYanOsConfig = new HuiYanOsConfig();
// This license file is located in the assets folder.
huiYanOsConfig.setAuthLicense("YTFaceSDK.license");
// The license file for device risk control (required when using the device risk control mode) should also be stored in the assets folder.
huiYanOsConfig.setRiskLicense("turing.lic");
// Enable the device's risk control capabilities.
huiYanOsConfig.setOpenCheckRiskMode(true);
if (compatCheckBox.isChecked()) {
    huiYanOsConfig.setPageColorStyle(PageColorStyle.Dark);
}

// This method starts the identity verification process. The initial liveness verification step uses the data delivered by the backend through the currentToken.
HuiYanOsApi.startHuiYanAuth(currentToken, huiYanOsConfig, new HuiYanOsAuthCallBack() {
    @Override
    public void onSuccess(HuiYanOsAuthResult authResult) {
        // Display the result.
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(SimplifyActivity.this, "Liveness detection passed!", Toast.LENGTH_SHORT).show();
            }
        });
    }

    @Override
    public void onFail(int errorCode, String errorMsg, String token) {
        String msg = "Liveness detection failed, code: " + errorCode + ", msg: " + errorMsg + ", token: " + token;
        Log.e(TAG, "onFail: " + msg);
        // Display the result.
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(SimplifyActivity.this, msg, Toast.LENGTH_SHORT).show();
            }
        });
    }
});
HuiYanOsAuthResult is the result object returned when liveness detection succeeds.
Note
You must apply for the “YTFaceSDK.license” and “turing.lic” files manually; contact Customer Service to request these licenses. Once you have obtained them, place them in the assets folder, as shown below.
├── codedemo
│ ├── build.gradle
│ ├── libs
│ ├── proguard-rules.pro
│ └── src
│ └── main
│ └── assets
│ ├── turing.lic
│ └── YTFaceSDK.license

6. SDK resource release

When the app no longer needs the SDK, call the SDK resource release API to free its resources.
@Override
protected void onDestroy() {
    super.onDestroy();
    // Release resources when exiting the app.
    HuiYanOsApi.release();
}

7. Obfuscation rule configuration

If your application has the obfuscation feature enabled, please add the following content to your obfuscation file to ensure the proper functioning of the SDK.
# Keep rules required by the eKYC SDK:
-keep class com.tencent.could.huiyansdk.** {*;}
-keep class com.tencent.could.aicamare.** {*;}
-keep class com.tencent.could.component.** {*;}
-keep class com.tencent.youtu.** {*;}
-keep class com.tenpay.utils.SMUtils {*;}
-keep class com.tencent.turingface.** {*;}
-keep class com.turingface.sdk.** {*;}
-keep class com.tencent.cloud.ai.network.** {*;}

Integration with iOS

1. Environment Dependency

1. Development environment: Xcode 11.0 or later
2. SDK (iOS) is compatible with iOS devices running iOS 9.0 or later versions.

2. SDK integration steps

Manual integration method
1. Import relevant libraries and files
Import relevant Framework via Link Binary With Libraries.
2. The libraries required by the SDK are as follows
└──HuiYanOverseasSDK.xcframework
Note
Set the Embed value for the HuiYanOverseasSDK.xcframework package, located in General > Frameworks, Libraries, and Embedded Content, to Embed & Sign.
3. In “Copy Bundle Resources,” import the authorization file and model resource files.
├── YTFaceSDK.license
├── turing.license
└── face-tracker-v003.bundle
4. In “Copy Bundle Resources,” import resource files.
└── HuiYanSDKUI.bundle
5. Integrate via Pod method
Create a folder named “CloudHuiYanSDK_FW” in the root directory of the project. Within it, create subfolders named “Frameworks” and “Resources,” and then move the SDK framework and resource bundles into the corresponding subfolders. The structure should be as follows:
├──Your Project.xcodeproj
├──Podfile
├──CloudHuiYanSDK_FW
├───────CloudHuiYanSDK_FW.podspec
├───────Frameworks
├────────────HuiYanOverseasSDK.xcframework
├───────Resources
├────────────HuiYanSDKUI.bundle
└────────────face-tracker-v003.bundle
Configure it in the Podfile.
target 'HuiYanAuthDemo' do
use_frameworks!
pod 'CloudHuiYanSDK_FW', :path => './CloudHuiYanSDK_FW'
end
Check the configuration:
Build Settings -> Framework Search Paths: add $(inherited)
Build Settings -> Other Linker Flags: add $(inherited)
Install or update the pods (see the commands below).
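For example, run the following from the directory that contains the Podfile (the project path is a placeholder):

cd /path/to/YourProject    # directory containing the Podfile
pod install                # or: pod update CloudHuiYanSDK_FW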
Permission settings
The SDK requires access to the phone’s network and camera. Add the necessary permission declarations accordingly: in the main project’s Info.plist file, add the following key-value pair (NSCameraUsageDescription appears in Xcode as “Privacy - Camera Usage Description”).
<key>NSCameraUsageDescription</key>
<string>Permission to enable your camera is required for face recognition.</string>

3. SDK API Instructions

SDK main process
The integrator only needs to input the Token and initiate the corresponding method to complete the liveness detection process and receive results.
For Tokens, refer to the cloud API document: ApplyLivenessToken
For the results obtained after performing the liveness detection, refer to the cloud API document: GetFaceIdResultIntl
The following diagram illustrates the overall interaction logic between the SDK, the client, and the server in a streamlined integration approach.

Start the eKYC process
#import <HuiYanOverseasSDK/HuiYanSDK.h>

// Obtain the token.
NSString *faceToken = self.tokenTextField.text;
// Configure the SDK.
HuiYanOsConfig *config = [[HuiYanOsConfig alloc] init];
// Configure the license file.
config.authLicense = [[NSBundle mainBundle] pathForResource:@"YTFaceSDK.license" ofType:@""];
// Configure the timeout for the preparation phase.
config.prepareTimeoutMs = 20000;
// Configure the timeout for the motion detection phase.
config.actionTimeoutMs = 20000;
// Delete local liveness detection videos.
config.isDeleteVideoCache = YES;
// Set UI-related callbacks.
config.delegate = self;
// Configure custom multi-language types.
config.languageType = EN;
// config.userLanguageFileName = @"ko";
// config.userLanguageBundleName = @"UseLanguageBundle";
config.iShowTipsPage = YES;

[[HuiYanOSKit sharedInstance] startHuiYaneKYC:faceToken withConfig:config witSuccCallback:^(HuiYanOsAuthResult * _Nonnull authResult, id _Nullable reserved) {
    NSString *token = authResult.faceToken;
} withFailCallback:^(int errCode, NSString * _Nonnull errMsg, id _Nullable reserved) {
    NSString *showMsg = [NSString stringWithFormat:@"err:%d:%@", errCode, errMsg];
    NSLog(@"err:%@", showMsg);
}];
Note:
You must apply for the “YTFaceSDK.license” and “turing.license” files manually; contact Customer Service to request these licenses. Once you have obtained them, add them to the resource files (Copy Bundle Resources).
Enhance Mode/Plus Mode
Liveness face comparison offers two advanced security modes: Enhance Mode and Plus Mode. These modes enhance the security of liveness detection by combining device-based risk control technologies. To enable these modes, corresponding settings must be made during both the SDK configuration process and the token acquisition phase.
1. SDK configuration requirements
Whether in Enhance Mode or Plus Mode, it is necessary to enable device-based risk control functionality within the SDK and configure the corresponding risk control license file.
// Configure the SDK.
HuiYanOsConfig *config = [[HuiYanOsConfig alloc] init];
// Configure the license file.
config.authLicense = [[NSBundle mainBundle] pathForResource:@"YTFaceSDK.license" ofType:@""];
config.openCheckRiskMode = YES;
// If risk detection is enabled, risk authorization information must be provided.
config.riskLicense = [[NSBundle mainBundle] pathForResource:@"turing.license" ofType:@""];
Note:
You must apply for the “turing.license” file manually; contact Customer Service to request it. Once you have obtained the license, place the file in the project directory and add it to the resource files (Copy Bundle Resources).
2. Token acquisition configuration
When calling the “GetFaceIdTokenIntl” or ApplySdkVerificationToken API to obtain the business token, it is necessary to specify whether to use the Enhance mode or the Plus mode by setting the “SdkVersion” parameter.
Enhance mode: Set the corresponding “SdkVersion” value.
Plus mode: Set the corresponding “SdkVersion” value.
For the specific values of “SdkVersion,” please refer to the GetFaceIdTokenIntl API Documentation or the ApplySdkVerificationToken API Documentation.
3. Steps to enable the mode
Set the corresponding “SdkVersion” parameter when obtaining the Token.
Enable device-based risk control functionality in the SDK configuration.
Input the configured Token when calling the “startHuiYaneKYC” method.
The SDK will automatically enable the appropriate security level based on the information contained in the Token.
Note:
Please ensure that the SDK configuration and the token acquisition settings are consistent with each other; otherwise, the mode may not be enabled.
