Install the SDK via npm and import the ArSdk class:

npm install tencentcloud-webar

import { ArSdk } from 'tencentcloud-webar';
Alternatively, load the SDK with a script tag and read the ArSdk class from window.AR:

<script charset="utf-8" src="https://webar-static.tencent-cloud.com/ar-sdk/resources/latest/webar-sdk.umd.js"></script>
<script>
  // Retrieve the ArSdk class from window.AR
  const { ArSdk } = window.AR;
  ......
</script>
Pass the authentication information and an input stream to the ArSdk constructor to initialize the SDK:

// Get the authentication information
const authData = {
  licenseKey: 'xxxxxxxxx',
  appId: 'xxx',
  authFunc: authFunc // For details, see "Configuring and Using a License - Signature"
};
// The input stream
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: { width: w, height: h }
})
const config = {
  module: {
    beautify: true, // Whether to enable the effect module, which offers beautification and makeup effects as well as stickers
    segmentation: true, // Whether to enable the keying module, which allows you to change the background
    segmentationLevel: 0 // Valid values: 0 | 1 | 2. The higher the value, the better the segmentation effect, but also the higher the resource usage and hardware requirements.
  },
  auth: authData, // The authentication information
  input: stream, // The input stream
  beautify: { // The effect parameters for initialization (optional)
    whiten: 0.1,
    dermabrasion: 0.3,
    eye: 0.2,
    chin: 0,
    lift: 0.1,
    shave: 0.2
    // For more beautification parameters, see the API documentation.
  }
  // For more config options, see the API documentation.
}
const sdk = new ArSdk(
  // Pass in a config object to initialize the SDK
  config
)
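The authFunc referenced above must return the signature information described in "Configuring and Using a License - Signature". The sketch below is only an illustration: it assumes your own backend exposes an endpoint (the '/webar-auth' path is a placeholder) that computes the signature with your license token and returns it together with the timestamp.

// Sketch only: '/webar-auth' is a hypothetical endpoint on your own server.
// It is assumed to return the signature and timestamp required by the license check.
const authFunc = async () => {
  const res = await fetch('/webar-auth');
  const { signature, timestamp } = await res.json();
  return { signature, timestamp };
};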
Besides a MediaStream, input also accepts a string or HTMLImageElement to process an image, or an HTMLVideoElement to process a video.

const config = {
  auth: authData, // The authentication information
  input: 'https://xxx.png', // The input; an image URL, image element, or video element is also supported
}
const sdk = new ArSdk(
  // Pass in a config object to initialize the SDK
  config
)

// You can display the effect and filter list in the `created` callback. For details, see "SDK Integration - Parameters and APIs".
sdk.on('created', () => {
  // Get the built-in makeup effects
  sdk.getEffectList({
    Type: 'Preset',
    Label: 'Makeup',
  }).then(res => {
    effectList = res
  });
  // Get the built-in filters
  sdk.getCommonFilter().then(res => {
    filterList = res
  })
})
// Call `setBeautify`, `setEffect`, or `setFilter` in the `ready` callback.
// For details, see "SDK Integration - Configuring Effects".
sdk.on('ready', () => {
  // Configure beautification effects
  sdk.setBeautify({
    whiten: 0.2
  });
  // Configure special effects
  sdk.setEffect({
    id: effectList[0].EffectId,
    intensity: 0.7
  });
  // Configure filters
  sdk.setFilter(filterList[0].EffectId, 0.5)
})
Call ArSdk.prototype.getOutput to get the output stream.
The output streams you get in different callbacks vary slightly. Choose the one that fits your needs.

Getting the output stream in the cameraReady callback: because the SDK has not loaded its resources or completed initialization at this point, the original video is played first, and the effects are applied automatically once SDK initialization completes.

sdk.on('cameraReady', async () => {
  // By getting the output stream in the `cameraReady` callback, you can display a video image sooner. However, because the initialization parameters have not taken effect at this point, the output stream is the same as the original stream.
  // Choose this method if you want to display a video image as soon as possible but do not need effects applied the moment it is displayed.
  // You do not need to update the stream after the effects start to take effect.
  const output = await sdk.getOutput();
  // Use a `video` element to preview the output stream
  const video = document.createElement('video')
  video.setAttribute('playsinline', '');
  video.setAttribute('autoplay', '');
  video.srcObject = output
  document.body.appendChild(video)
  video.play()
})
Getting the output stream in the ready callback: the initialization parameters have already taken effect, so the video shows effects from the start of playback.

sdk.on('ready', async () => {
  // If you get the output stream in the `ready` callback, the initialization parameters have already taken effect, so the output stream shows effects.
  // The `ready` callback occurs later than `cameraReady`. Get the output stream in `ready` if you want the video to show effects the moment it is displayed and do not need it displayed as early as possible.
  const output = await sdk.getOutput();
  // Use a `video` element to preview the output stream
  const video = document.createElement('video')
  video.setAttribute('playsinline', '');
  video.setAttribute('autoplay', '');
  video.srcObject = output
  document.body.appendChild(video)
  video.play()
})
Because the output is a standard MediaStream, you can use a live streaming SDK (for example, the TRTC web SDK or the LEB web SDK) to publish the stream.

const output = await sdk.getOutput()
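For illustration, the sketch below publishes the output with the TRTC web SDK, assuming the v4 API of trtc-js-sdk; sdkAppId, userId, userSig, and roomId are placeholders you supply, and the exact flow depends on the SDK version you use.

import TRTC from 'trtc-js-sdk';

// Sketch only: sdkAppId, userId, userSig, and roomId are placeholders.
const client = TRTC.createClient({ mode: 'rtc', sdkAppId, userId, userSig });
await client.join({ roomId });

// Wrap the output tracks into a TRTC local stream and publish it.
const output = await sdk.getOutput();
const localStream = TRTC.createStream({
  audioSource: output.getAudioTracks()[0],
  videoSource: output.getVideoTracks()[0]
});
await localStream.initialize();
await client.publish(localStream);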
Note that getOutput is an async API: the output is returned only after the SDK has been initialized and has generated a stream. You can pass an FPS parameter to getOutput to specify the output frame rate (for example, 15); if you do not pass it, the original frame rate is kept. You can also call getOutput multiple times to generate streams with different frame rates for different scenarios (for example, a high frame rate for preview and a low frame rate for stream publishing).
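For example, assuming the frame rate is passed directly as the argument to getOutput as described above:

// Assumption: the desired frame rate is passed straight to getOutput.
const previewOutput = await sdk.getOutput();   // keep the original frame rate for local preview
const publishOutput = await sdk.getOutput(15); // use a lower frame rate for stream publishing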
Call sdk.updateInputStream to update the input stream. The following code shows how to use updateInputStream after switching from the computer's default camera to an external camera.

async function getVideoDeviceList() {
  const devices = await navigator.mediaDevices.enumerateDevices()
  const videoDevices = []
  devices.forEach((device) => {
    if (device.kind === 'videoinput') {
      videoDevices.push({
        label: device.label,
        id: device.deviceId
      })
    }
  })
  return videoDevices
}
async function initDom() {
  const videoDeviceList = await getVideoDeviceList()
  let dom = ''
  videoDeviceList.forEach(device => {
    dom = `${dom}<button id=${device.id} onclick='toggleVideo("${device.id}")'>${device.label}</button>`
  })
  const div = document.createElement('div');
  div.id = 'container';
  div.innerHTML = dom;
  document.body.appendChild(div);
}
async function toggleVideo(deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      deviceId,
      width: 1280,
      height: 720,
    }
  })
  // Call the API provided by the SDK to change the input stream.
  // After the input stream is updated, you do not need to call `getOutput` again; the SDK keeps producing output.
  sdk.updateInputStream(stream, true) // The second parameter defaults to true, which stops the old media tracks; set it to false to keep them.
}
initDom()
You can use the disable and enable methods to manually pause and resume detection. Pausing detection reduces CPU usage.

<button id="disable">Disable detection</button>
<button id="enable">Enable detection</button>
// Get the buttons defined above
const disableButton = document.getElementById('disable')
const enableButton = document.getElementById('enable')
// Disable detection and output the original stream
disableButton.onclick = () => {
  sdk.disable()
}
// Enable detection and output a processed stream
enableButton.onclick = () => {
  sdk.enable()
}
You can use the stop and resume methods to pause and resume the output. In the paused state, the image stays static and playback is halted.

<button id="stop">stop</button>
<button id="resume">resume</button>
// Get the buttons defined above
const stopButton = document.getElementById('stop')
const resumeButton = document.getElementById('resume')
// Stop the output
stopButton.onclick = () => {
  sdk.stop()
}
// Resume the output
resumeButton.onclick = () => {
  sdk.resume()
}