functions | desc |
Request cross-room call | |
Exit cross-room call | |
Set dashboard margin | |
Call experimental APIs | |
Create room subinstance (for concurrent multi-room listen/watch) | |
Terminate TRTCCloud instance (singleton mode) | |
Terminate room subinstance | |
Enable 3D spatial effect | |
Enable volume reminder | |
Enable custom audio capturing mode | |
Enable custom audio playback | |
Enable/Disable custom video capturing mode | |
Enable dual-channel encoding mode with big and small images | |
Enable/Disable custom audio track | |
Enter room | |
Exit room | |
Generate custom capturing timestamp | |
Get the capturing volume of local audio | |
Get sound effect management class (TXAudioEffectManager) | |
Get the playback volume of remote audio | |
Get beauty filter management class (TXBeautyManager) | |
Get playable audio data | |
Get device management class (TXDeviceManager) | |
Get SDK version information | |
Mix custom audio track into SDK | |
Pause/Resume playing back all remote users' audio streams | |
Pause/Resume subscribing to all remote users' video streams | |
Pause/Resume publishing local audio stream | |
Pause/Resume publishing local video stream | |
Pause/Resume playing back remote audio stream | |
Pause/Resume subscribing to remote user's video stream | |
Pause screen sharing | |
Resume screen sharing | |
Deliver captured audio data to SDK | |
Use UDP channel to send custom message to all users in room | |
Deliver captured video frames to SDK | |
Use SEI channel to send custom message to all users in room | |
Set the maximum 3D spatial attenuation range for userId's audio stream | |
Set the capturing volume of local audio | |
Set custom audio data callback | |
Set the playback volume of remote audio | |
Set audio route | |
Set the callback format of original audio frames captured by local mic | |
Enable/Disable console log printing | |
Set subscription mode (which must be set before room entry for it to take effect) | |
Set the adaptation mode of G-sensor | |
Set the queue that drives the TRTCCloudDelegate event callback | |
Set TRTC event callback | |
Set the callback format of preprocessed local audio frames | |
Set the rendering parameters of local video image | |
Set video data callback for third-party beauty filters | |
Set the callback of custom rendering for local video | |
Enable/Disable local log compression | |
Set local log storage path | |
Set log output level | |
Set log callback | |
Set the publish volume and playback volume of mixed custom audio track | |
Set the layout and transcoding parameters of On-Cloud MixTranscoding | |
Set the callback format of audio frames to be played back by system | |
Set network quality control parameters | |
Set the parallel strategy of remote audio streams | |
Set the audio playback volume of remote user | |
Set the rendering mode of remote video image | |
Set the callback of custom rendering for remote video | |
Switch the big/small image of specified remote user | |
Set the video encoding parameters of screen sharing (i.e., substream) (for desktop and mobile systems) | |
Set the mirror mode of image output by encoder | |
Set the encoding parameters of video encoder | |
Set the direction of image output by video encoder | |
Set placeholder image during local video pause | |
Add watermark | |
Create TRTCCloud instance (singleton mode) | |
Display dashboard | |
Screencapture video | |
Start audio recording | |
Enable local audio capturing and publishing | |
Enable the preview image of local camera (mobile) | |
Start local media recording | |
Start publishing audio/video streams to non-Tencent Cloud CDN | |
Publish a stream | |
Start publishing audio/video streams to Tencent Cloud CSS CDN | |
Subscribe to remote user's video stream and bind video rendering control | |
Start screen sharing | |
Start network speed test (used before room entry) | |
Enable system audio capturing (for Android systems only) | |
Stop subscribing to all remote users' video streams and release all rendering resources | |
Stop audio recording | |
Stop local audio capturing and publishing | |
Stop camera preview | |
Stop local media recording | |
Stop publishing audio/video streams to non-Tencent Cloud CDN | |
Stop publishing | |
Stop publishing audio/video streams to Tencent Cloud CSS CDN | |
Stop subscribing to remote user's video stream and release rendering control | |
Stop screen sharing | |
Stop network speed test | |
Stop system audio capturing (for desktop systems and Android systems) | |
Switch role (supports permission credential) | |
Switch room | |
Update the preview image of local camera | |
Modify publishing parameters | |
Update the specified remote user's position for 3D spatial effect | |
Update remote user's video rendering control | |
Update self position and orientation for 3D spatial effect |
TRTCCloud sharedInstance | (Context context) |
param | desc |
context | It is only applicable to the Android platform. The SDK internally converts it into the ApplicationContext of Android to call the Android system API. |
Note:
If you use delete ITRTCCloud*, a compilation error will occur. Please use destroyTRTCCloud to release the object pointer.
On Windows, macOS, or iOS, please call the getTRTCShareInstance() API.
On Android, please call the getTRTCShareInstance(void *context) API.
void setListener | (TRTCCloudListener listener) |
param | desc |
listener | callback instance. |
void setListenerHandler | (Handler listenerHandler) |
If you do not specify the listenerHandler attribute, the SDK will use MainQueue as the queue for driving TRTCCloudListener event callbacks by default; that is, all callback functions in TRTCCloudListener will be driven by MainQueue.
param | desc |
listenerHandler | Handler that drives the TRTCCloudListener event callbacks. |
Note: if you specify a listenerHandler, please do not manipulate the UI in the TRTCCloudListener callback functions; otherwise, thread safety issues will occur.
void enterRoom | (TRTCParams param
| int scene) |
After calling this API, you will receive the onEnterRoom(result) callback: if room entry succeeded, the result parameter will be a positive number (result > 0), indicating the time in milliseconds (ms) between the function call and room entry; if room entry failed, the result parameter will be a negative number (result < 0), indicating the TXLiteAVError error code for the failure.
param | desc |
param | Room entry parameter, which is used to specify the user's identity, role, authentication credentials, and other information. For more information, please see TRTCParams. |
scene | Application scenario, which is used to specify the use case. The same TRTCAppScene should be configured for all users in the same room. |
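A minimal room-entry sketch in Java (not official sample code); the SDKAppID, userSig, and room ID below are placeholders that you must replace with your own values:

```java
// Minimal room-entry sketch; credentials are placeholders.
TRTCCloud trtcCloud = TRTCCloud.sharedInstance(context);

TRTCCloudDef.TRTCParams trtcParams = new TRTCCloudDef.TRTCParams();
trtcParams.sdkAppId = 1400000000;                      // your SDKAppID (placeholder)
trtcParams.userId   = "user_01";
trtcParams.userSig  = "...";                           // generated on your server
trtcParams.roomId   = 10001;                           // or use strRoomId instead
trtcParams.role     = TRTCCloudDef.TRTCRoleAnchor;     // required for LIVE / VoiceChatRoom scenes

// Every user in the same room must pass the same scene.
trtcCloud.enterRoom(trtcParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
```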
Note: if scene is specified as TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom, you must use the role field in TRTCParams to specify the role of the current user in the room. The same scene should be configured for all users in the same room.
void exitRoom | |
This API is used to exit the room. After exiting the room, the SDK will use the onExitRoom() callback in TRTCCloudDelegate to notify you. If you want to call enterRoom again or switch to another audio/video SDK, please wait for the onExitRoom() callback, so as to avoid the problem of the camera or mic being occupied.
void switchRole | (int role) |
This API is used to switch between the anchor and audience roles. You can either use the role field in TRTCParams during room entry to specify the user role in advance or call the switchRole API to switch roles after room entry.
param | desc |
role | Role, which is anchor by default: TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room. TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users. |
Note: if the scene you specify in enterRoom is TRTC_APP_SCENE_VIDEOCALL or TRTC_APP_SCENE_AUDIOCALL, please do not call this API.
void switchRole | (int role
| final String privateMapKey) |
This overload switches between the anchor and audience roles and additionally carries a permission credential. You can either use the role field in TRTCParams during room entry to specify the user role in advance or call the switchRole API to switch roles after room entry.
param | desc |
privateMapKey | Permission credential used for permission control. If you want only users with the specified userId values to enter a room or push streams, you need to use privateMapKey to restrict the permission. We recommend you use this parameter only if you have high security requirements. For more information, please see Enabling Advanced Permission Control. |
role | Role, which is anchor by default: TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room. TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users. |
Note: if the scene you specify in enterRoom is TRTCAppSceneVideoCall or TRTCAppSceneAudioCall, please do not call this API.
void switchRoom | (TRTCSwitchRoomConfig config) |
If your role in the room is audience, calling this API is equivalent to exitRoom (current room) + enterRoom (new room). If your role is anchor, the API will retain the current audio/video publishing status while switching the room; therefore, during the room switch, camera preview and sound capturing will not be interrupted. Compared with exitRoom + enterRoom, switchRoom can get better smoothness and use less code. The switching result is returned through the onSwitchRoom(errCode, errMsg) callback in TRTCCloudDelegate.
param | desc |
config | Room switching parameter, which carries the ID of the target room (roomId or strRoomId). |
Note: the config parameter contains both the roomId and strRoomId parameters. You should pay special attention as detailed below when specifying these two parameters: if you decide to use strRoomId, then set roomId to 0; if both are specified, roomId will be used. All rooms need to use either strRoomId or roomId at the same time; they cannot be mixed, otherwise there will be many unexpected bugs.
void ConnectOtherRoom | (String param) |
For example, after anchor A in room "101" uses connectOtherRoom() to successfully call anchor B in room "102":
All users in room "101" will receive the onRemoteUserEnterRoom(B) and onUserVideoAvailable(B,true) event callbacks of anchor B; that is, all users in room "101" can subscribe to the audio/video streams of anchor B.
All users in room "102" will receive the onRemoteUserEnterRoom(A) and onUserVideoAvailable(A,true) event callbacks of anchor A; that is, all users in room "102" can subscribe to the audio/video streams of anchor A.

```java
JSONObject jsonObj = new JSONObject();
jsonObj.put("roomId", 102);
jsonObj.put("userId", "userB");
trtc.ConnectOtherRoom(jsonObj.toString());
```

If you want to use a string-type room ID, replace roomId in the JSON with strRoomId, such as {"strRoomId": "102", "userId": "userB"}:

```java
JSONObject jsonObj = new JSONObject();
jsonObj.put("strRoomId", "102");
jsonObj.put("userId", "userB");
trtc.ConnectOtherRoom(jsonObj.toString());
```

param | desc |
param | desc |
param | You need to pass in a string parameter in JSON format: roomId represents the room ID in numeric format, strRoomId represents the room ID in string format, and userId represents the user ID of the target anchor. |
void setDefaultStreamRecvMode | (boolean autoRecvAudio |
| boolean autoRecvVideo) |
This API must be called before room entry (enterRoom) for it to take effect. In manual subscription mode, remote video is not played until you call the startRemoteView API.
param | desc |
autoRecvAudio | true: automatic subscription to audio; false: manual subscription to audio by calling muteRemoteAudio(false) . Default value: true |
autoRecvVideo | true: automatic subscription to video; false: manual subscription to video by calling startRemoteView . Default value: true |
TRTCCloud createSubCloud | |
TRTCCloud was originally designed to work in the singleton mode, which limited the ability to watch concurrently in multiple rooms. By calling this API, you can create multiple TRTCCloud instances, so that you can enter multiple different rooms at the same time to listen/watch audio/video streams. However, you can publish your audio/video streams in only one TRTCCloud instance at any time.

```java
TRTCCloud mainCloud = TRTCCloud.sharedInstance(mContext);
mainCloud.enterRoom(params1, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
//...
//Switch the role from "anchor" to "audience" in your own room
mainCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
mainCloud.muteLocalVideo(true);
mainCloud.muteLocalAudio(true);
//...
//Use subcloud to enter another room and switch the role from "audience" to "anchor"
TRTCCloud subCloud = mainCloud.createSubCloud();
subCloud.enterRoom(params2, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
subCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
subCloud.muteLocalVideo(false);
subCloud.muteLocalAudio(false);
//...
//Exit from new room and release it.
subCloud.exitRoom();
mainCloud.destroySubCloud(subCloud);
```

Note:
The same user can enter rooms with different roomId values by using the same userId.
Two devices cannot use the same userId to enter the same room with a specified roomId.
You can publish your audio/video streams in only one TRTCCloud instance at any time. If streams are pushed simultaneously in different rooms, a status mess will be caused in the cloud, leading to various bugs.
The TRTCCloud instance created by the createSubCloud API cannot call APIs related to the local audio/video in the subinstance, except switchRole, muteLocalVideo, and muteLocalAudio. To use APIs such as the beauty filter, please use the original TRTCCloud instance object.
void destroySubCloud | (TRTCCloud subCloud) |
param | desc |
subCloud | |
void startPublishing | (final String streamId |
| final int streamType) |
You can set the StreamId of the live stream through the streamId parameter, so as to specify the playback address of the user's audio/video streams on CSS CDN. For example, if you specify the current user's live stream ID as user_stream_001 through this API, the corresponding CDN playback address is composed of that stream ID and yourdomain, where yourdomain is your playback domain name with an ICP filing. You can also specify the streamId when setting the TRTCParams parameter of enterRoom, which is the recommended approach.
param | desc |
streamId | Custom stream ID. |
streamType | Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported. |
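A one-line usage sketch; the stream ID and the Java-style constant name are assumptions based on the parameter table above:

```java
// Sketch: ask the TRTC server to relay the primary (big) stream to CSS CDN under a custom stream ID.
trtcCloud.startPublishing("user_stream_001", TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
```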
void startPublishCDNStream | (TRTCPublishCDNParam param) |
This API is similar to the startPublishing API. The difference is that startPublishing can only publish audio/video streams to Tencent Cloud CDN, while this API can relay streams to live streaming CDN services of other cloud providers.
param | desc |
param | CDN relaying parameter. For more information, please see TRTCPublishCDNParam. |
Note: using the startPublishing API to publish audio/video streams to Tencent Cloud CSS CDN does not incur additional fees, while using the startPublishCDNStream API to publish audio/video streams to non-Tencent Cloud CDN incurs additional relaying bandwidth fees.
void setMixTranscodingConfig | (TRTCTranscodingConfig config) |
param | desc |
config | If config is not empty, On-Cloud MixTranscoding will be started; otherwise, it will be stopped. For more information, please see TRTCTranscodingConfig. |
If you do not set the streamId in the config parameter, TRTC will mix the multiple channels of images in the room into the audio/video streams corresponding to the current user, i.e., A + B => A. If you set the streamId in the config parameter, TRTC will mix the multiple channels of images in the room into the specified streamId, i.e., A + B => streamId.
Note: after On-Cloud MixTranscoding is no longer needed, remember to set config to empty to cancel it; otherwise, additional fees may be incurred.
void startPublishMediaStream | (final TRTCPublishTarget target |
| final TRTCStreamEncoderParam params |
| final TRTCStreamMixingConfig config) |
param | desc |
config | The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig. |
params | The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we also recommend you set this parameter. For details, see TRTCStreamEncoderParam. |
target | The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget. |
Note: you can relay to multiple CDNs through target; you will be charged only once for transcoding even if you relay to multiple CDNs.
void updatePublishMediaStream | (final String taskId |
| final TRTCPublishTarget target |
| final TRTCStreamEncoderParam params |
| final TRTCStreamMixingConfig config) |
param | desc |
config | The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig. |
params | The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we recommend you set this parameter. For details, see TRTCStreamEncoderParam. |
target | The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget. |
taskId |
void stopPublishMediaStream | (final String taskId) |
param | desc |
taskId |
If the taskId is left empty, the TRTC backend will end all tasks initiated by startPublishMediaStream. You can leave it empty if you have started only one task or want to stop all your publishing tasks.
void startLocalPreview | (boolean frontCamera
| TXCloudVideoView view) |
If this API is called before enterRoom, the SDK will only enable the camera and wait until enterRoom is called before starting push. If it is called after enterRoom, the SDK will enable the camera and automatically start pushing the video stream. When the first camera video frame starts to be rendered, you will receive the onCameraDidReady callback in TRTCCloudDelegate.
param | desc |
frontCamera | true: front camera; false: rear camera |
view | Control that carries the video image |
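A short sketch of the typical call order, assuming trtcCloud is the shared instance and the view ID is a placeholder from your own layout:

```java
// Sketch: preview the front camera in a TXCloudVideoView and start capturing/publishing audio.
TXCloudVideoView localView = findViewById(R.id.trtc_local_view);   // placeholder view id
trtcCloud.startLocalPreview(true, localView);                       // true = front camera
trtcCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
```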
Note: if you want to preview the camera image and adjust the beauty filter parameters through BeautyManager before going live, you can:
Scheme 1: call startLocalPreview before calling enterRoom.
Scheme 2: call startLocalPreview and muteLocalVideo(true) after calling enterRoom.
void updateLocalView | (TXCloudVideoView view) |
void muteLocalVideo | (int streamType |
| boolean mute) |
This API has the same effect as startLocalPreview/stopLocalPreview when TRTCVideoStreamTypeBig is specified, but has higher performance and response speed. The startLocalPreview/stopLocalPreview APIs need to enable/disable the camera, which are hardware device-related operations, so they are very time-consuming. In contrast, muteLocalVideo only needs to pause or allow the data stream at the software level, so it is more efficient and more suitable for scenarios where frequent enabling/disabling are needed. After local video publishing is paused, other users in the same room will receive the onUserVideoAvailable(userId, false) callback notification; after it is resumed, they will receive the onUserVideoAvailable(userId, true) callback notification.
param | desc |
mute | true: pause; false: resume |
streamType | Specify for which video stream to pause (or resume). Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported |
void setVideoMuteImage | (Bitmap image |
| int fps) |
After you call muteLocalVideo(true) to pause the local video image, you can set a placeholder image by calling this API. Then, other users in the room will see this image instead of a black screen.
param | desc |
fps | Frame rate of the placeholder image. Minimum value: 5. Maximum value: 10. Default value: 5 |
image | Placeholder image. A null value means that no more video stream data will be sent after muteLocalVideo. The default value is null. |
void startRemoteView | (String userId |
| int streamType |
| TXCloudVideoView view) |
This API subscribes to the video stream of the remote user specified by userId and renders it to the rendering control specified by the view parameter. You can set the display mode of the video image through setRemoteRenderParams. If you already know the userId of a user who has a video stream in the room, you can directly call startRemoteView to subscribe to the user's video image; otherwise, you can wait for the onUserVideoAvailable notification after enterRoom.
param | desc |
streamType | Video stream type of the userId specified for watching: HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall (the remote user should enable dual-channel encoding through enableEncSmallVideoStream for this parameter to take effect) Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
view | Rendering control that carries the video image |
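A sketch of subscribing inside the onUserVideoAvailable notification of your TRTCCloudListener implementation (remoteView is an assumed TXCloudVideoView from your own layout):

```java
// Sketch: subscribe to a remote user's HD (big) stream when it becomes available.
@Override
public void onUserVideoAvailable(String userId, boolean available) {
    if (available) {
        trtcCloud.startRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, remoteView);
    } else {
        trtcCloud.stopRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
    }
}
```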
Note: watching the big image and substream image of the same userId at the same time is supported, but watching the big image and small image at the same time is not. Only when the specified userId enables dual-channel encoding through enableEncSmallVideoStream can the user's small image be viewed. If the requested small image of the specified userId does not exist, the SDK will switch to the big image of the user by default.
void updateRemoteView | (String userId
| int streamType |
| TXCloudVideoView view) |
param | desc |
streamType | Type of the stream for which to set the preview window (only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported) |
userId | ID of the specified remote user |
view | Control that carries the video image |
void stopRemoteView | (String userId |
| int streamType) |
param | desc |
streamType | Video stream type of the userId specified for watching: HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
void muteRemoteVideoStream | (String userId |
| int streamType |
| boolean mute) |
param | desc |
mute | Whether to pause receiving |
streamType | Specify for which video stream to pause (or resume): HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
void muteAllRemoteVideoStreams | (boolean mute) |
param | desc |
mute | Whether to pause receiving |
void setVideoEncoderParam |
param | desc |
param | It is used to set relevant parameters for the video encoder. For more information, please see TRTCVideoEncParam. |
void setNetworkQosParam |
param | desc |
param | It is used to set relevant parameters for network quality control. For details, please refer to TRTCNetworkQosParam. |
void setLocalRenderParams |
param | desc |
params |
void setRemoteRenderParams | (String userId |
| int streamType |
|
param | desc |
params | |
streamType | It can be set to the primary stream image (TRTCVideoStreamTypeBig) or substream image (TRTCVideoStreamTypeSub). |
userId | ID of the specified remote user |
void setVideoEncoderRotation | (int rotation) |
param | desc |
rotation | Currently, rotation angles of 0 and 180 degrees are supported. Default value: TRTCVideoRotation_0 (no rotation) |
void setVideoEncoderMirror | (boolean mirror) |
param | desc |
mirror | Whether to enable remote mirror mode. true: yes; false: no. Default value: false |
void setGSensorMode | (int mode) |
param | desc |
mode | G-sensor mode. For more information, please see TRTCGSensorMode. Default value: TRTCGSensorMode_UIAutoLayout |
int enableEncSmallVideoStream | (boolean enable |
|
param | desc |
enable | Whether to enable small image encoding. Default value: false |
smallVideoEncParam | Video parameters of small image stream |
int setRemoteVideoStreamType | (String userId |
| int streamType) |
param | desc |
streamType | Video stream type, i.e., big image or small image. Default value: big image |
userId | ID of the specified remote user |
void snapshotVideo | (String userId |
| int streamType |
|
param | desc |
sourceType | Video image source, which can be the video stream image (TRTCSnapshotSourceTypeStream, generally in higher definition) or the video rendering image (TRTCSnapshotSourceTypeView) |
streamType | Video stream type, which can be the primary stream image (TRTCVideoStreamTypeBig, generally for camera) or substream image (TRTCVideoStreamTypeSub, generally for screen sharing) |
userId | User ID. A null value indicates to screencapture the local video. |
void startLocalAudio | (int quality) |
param | desc |
quality | Sound quality TRTCAudioQualitySpeech - Smooth: sample rate: 16 kHz; mono channel; audio bitrate: 16 Kbps. This is suitable for audio call scenarios, such as online meeting and audio call. TRTCAudioQualityDefault - Default: sample rate: 48 kHz; mono channel; audio bitrate: 50 Kbps. This is the default sound quality of the SDK and recommended if there are no special requirements. TRTCAudioQualityMusic - HD: sample rate: 48 kHz; dual channel + full band; audio bitrate: 128 Kbps. This is suitable for scenarios where Hi-Fi music transfer is required, such as online karaoke and music live streaming. |
void muteLocalAudio | (boolean mute) |
Unlike stopLocalAudio, muteLocalAudio(true) does not release the mic permission; instead, it continues to send mute packets with an extremely low bitrate. Because this keeps the audio timeline continuous, using muteLocalAudio instead of stopLocalAudio is recommended in scenarios where the requirement for recording file quality is high.
param | desc |
mute | true: mute; false: unmute |
void muteRemoteAudio | (String userId |
| boolean mute) |
param | desc |
mute | true: mute; false: unmute |
userId | ID of the specified remote user |
Note: the mute status will be automatically reset to false after room exit (exitRoom).
void muteAllRemoteAudio | (boolean mute) |
param | desc |
mute | true: mute; false: unmute |
Note: the mute status will be automatically reset to false after room exit (exitRoom).
void setAudioRoute | (int route) |
param | desc |
route | Audio route, i.e., whether the audio is output by speaker or receiver. Default value: TRTCAudioModeSpeakerphone |
void setRemoteAudioVolume | (String userId |
| int volume) |
For example, you can mute a specified remote user by calling setRemoteAudioVolume(userId, 0).
param | desc |
userId | ID of the specified remote user |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void setAudioCaptureVolume | (int volume) |
param | desc |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void setAudioPlayoutVolume | (int volume) |
param | desc |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void enableAudioVolumeEvaluation | (int interval |
| boolean enable_vad) |
param | desc |
enable_vad | true: Enable the voice detection of the local user false: Disable the voice detection of the local user |
interval | Set the interval in ms for triggering the onUserVoiceVolume callback. The minimum interval is 100 ms. If the value is smaller than or equal to 0, the callback will be disabled. We recommend you set this parameter to 300 ms. |
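For example, using the recommended 300 ms interval with local voice detection enabled:

```java
// Sketch: trigger onUserVoiceVolume every 300 ms and enable local voice activity detection.
trtcCloud.enableAudioVolumeEvaluation(300, true);
```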
Note: to enable this feature, call this API before calling startLocalAudio.
int startAudioRecording | (TRTCAudioRecordingParams param) |
If you do not call stopAudioRecording to stop recording before room exit, it will be automatically stopped after room exit.
param | desc |
param |
void startLocalRecording |
param | desc |
params |
void setRemoteAudioParallelParams |
param | desc |
params |
void enable3DSpatialAudioEffect | (boolean enabled) |
param | desc |
enabled | Whether to enable 3D spatial effect. It’s disabled by default. |
void updateSelf3DSpatialPosition | (int[] position |
| float[] axisForward |
| float[] axisRight |
| float[] axisUp) |
param | desc |
axisForward | The unit vector of the forward axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
axisRight | The unit vector of the right axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
axisUp | The unit vector of the up axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
position | The coordinate of self in the world coordinate system. The three values represent the forward, right and up coordinate values in turn. |
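A minimal sketch that places the local user at the world origin with unit axes laid out per the (forward, right, up) convention described above; the axis values are illustrative assumptions:

```java
// Sketch: local user at the origin, using orthogonal unit vectors for the three axes.
int[]   position    = {0, 0, 0};
float[] axisForward = {1.0f, 0.0f, 0.0f};
float[] axisRight   = {0.0f, 1.0f, 0.0f};
float[] axisUp      = {0.0f, 0.0f, 1.0f};
trtcCloud.updateSelf3DSpatialPosition(position, axisForward, axisRight, axisUp);
```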
void updateRemote3DSpatialPosition | (String userId |
| int[] position) |
param | desc |
position | The coordinate of self in the world coordinate system. The three values represent the forward, right and up coordinate values in turn. |
userId | ID of the specified remote user. |
void set3DSpatialReceivingRange | (String userId |
| int range) |
param | desc |
range | Maximum attenuation range of the audio stream. |
userId | ID of the specified user. |
void setWatermark | (Bitmap image |
| int streamType |
| float x |
| float y |
| float width) |
The watermark position is determined by the rect parameter, which is a quadruple in the format of (x, y, width, height). For example, if the current encoding resolution is 540x960 and the rect parameter is set to (0.1, 0.1, 0.2, 0.0), the top-left corner of the watermark will be at (540 x 0.1, 960 x 0.1) = (54, 96), the watermark width will be 540 x 0.2 = 108 px, and the watermark height will be calculated automatically based on the aspect ratio of the watermark image.
param | desc |
image | Watermark image, which must be a PNG image with transparent background |
rect | Unified coordinates of the watermark relative to the encoded resolution. Value range of x , y , width , and height : 0–1. |
streamType | Specify for which image to set the watermark. For more information, please see TRTCVideoStreamType. |
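A sketch matching the (0.1, 0.1, 0.2) example above, assuming a transparent-background PNG bundled as an app resource (the resource name is a placeholder):

```java
// Sketch: watermark at 10% from the top-left of the encoded big-stream image,
// 20% of the encoded width wide; height follows the image's aspect ratio.
Bitmap watermark = BitmapFactory.decodeResource(getResources(), R.drawable.watermark); // placeholder resource
trtcCloud.setWatermark(watermark, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, 0.1f, 0.1f, 0.2f);
```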
Note: if you want to add watermarks to both the primary stream image and the substream image, you need to call this API twice with streamType set to different values.
TXAudioEffectManager getAudioEffectManager | |
TXAudioEffectManager is a sound effect management API, through which you can implement features such as background music playback and short sound effects (for short sound effect files, set the isShortFile parameter to true).
void startScreenCapture | (int streamType
| |
|
param | desc |
encParams | Encoding parameters. For more information, please see TRTCCloudDef#TRTCVideoEncParam. If encParams is set to null , the SDK will automatically use the previously set encoding parameter. |
shareParams | For more information, please see TRTCCloudDef#TRTCScreenShareParams. You can use the floatingView parameter to pop up a floating window (you can also use Android's WindowManager parameter to configure automatic pop-up). |
void setSubStreamEncoderParam |
param | desc |
param | Substream encoding parameters. For more information, please see TRTCVideoEncParam. |
Note: even if you use the primary stream to transfer screen sharing data (i.e., you set type=TRTCVideoStreamTypeBig when calling startScreenCapture), you still need to call the setSubStreamEncoderParam API instead of the setVideoEncoderParam API to set the screen sharing encoding parameters.
void enableCustomVideoCapture | (int streamType
| boolean enable) |
param | desc |
enable | Whether to enable. Default value: false |
streamType | Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image). |
void sendCustomVideoData | (int streamType |
|
param | desc |
frame | Video data. If the memory-based delivery scheme is used, please set the data field; if the video memory-based delivery scheme is used, please set the TRTCTexture field. For more information, please see com::tencent::trtc::TRTCCloudDef::TRTCVideoFrame TRTCVideoFrame. |
streamType | Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image). |
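A sketch of the memory-based (I420 byte array) delivery path; i420Bytes and the 640x480 size are assumptions, and enableCustomVideoCapture is expected to have been enabled first:

```java
// Sketch: deliver one captured I420 frame to the SDK as the primary (big) stream.
TRTCCloudDef.TRTCVideoFrame videoFrame = new TRTCCloudDef.TRTCVideoFrame();
videoFrame.pixelFormat = TRTCCloudDef.TRTC_VIDEO_PIXEL_FORMAT_I420;
videoFrame.bufferType  = TRTCCloudDef.TRTC_VIDEO_BUFFER_TYPE_BYTE_ARRAY;
videoFrame.data        = i420Bytes;                      // your captured I420 buffer (placeholder)
videoFrame.width       = 640;
videoFrame.height      = 480;
videoFrame.timestamp   = trtcCloud.generateCustomPTS();  // stamp immediately after capture
trtcCloud.sendCustomVideoData(TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, videoFrame);
```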
Note: we recommend you call generateCustomPTS to get the timestamp value of a video frame immediately after capturing it, so as to achieve the best audio/video sync effect.
void enableCustomAudioCapture | (boolean enable) |
param | desc |
enable | Whether to enable. Default value: false |
void sendCustomAudioData |
Currently, only the PCM format (TRTCAudioFrameFormatPCM) is supported.
param | desc |
frame | Audio data |
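A sketch that feeds one 20 ms frame of 48 kHz mono PCM, assuming enableCustomAudioCapture(true) was called before room entry and pcm20ms is your own 1920-byte buffer:

```java
// Sketch: deliver one 20 ms frame of 48 kHz mono PCM audio to the SDK.
TRTCCloudDef.TRTCAudioFrame audioFrame = new TRTCCloudDef.TRTCAudioFrame();
audioFrame.data       = pcm20ms;                        // 48000 * 0.02s * 1 ch * 16 bit = 1920 bytes
audioFrame.sampleRate = 48000;
audioFrame.channel    = 1;
audioFrame.timestamp  = trtcCloud.generateCustomPTS();  // stamp immediately after capture
trtcCloud.sendCustomAudioData(audioFrame);
```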
void enableMixExternalAudioFrame | (boolean enablePublish |
| boolean enablePlayout) |
param | desc |
enablePlayout | Whether the mixed audio track should be played back locally. Default value: false |
enablePublish | Whether the mixed audio track should be played back remotely. Default value: false |
If you set both enablePublish and enablePlayout as false, the custom audio track will be completely closed.
int mixExternalAudioFrame | (TRTCAudioFrame frame) |
The return value of this API indicates the length (ms) of audio data remaining in the SDK's buffer pool. For example, if 50 is returned, it indicates that the buffer pool has 50 ms of audio data. As long as you call this API again within 50 ms, the SDK can make sure that continuous audio data is mixed. If the value returned is 100 or greater, you can wait after an audio frame is played to call the API again. If the value returned is smaller than 100, then there isn't enough data in the buffer pool, and you should feed more audio data into the SDK until the data in the buffer pool is above the safety level.
Fill the fields of TRTCAudioFrame as follows:
data: audio frame buffer. Audio frames must be in PCM format. Each frame can be 5-100 ms (20 ms is recommended) in duration. Assume that the sample rate is 48000 and the sound is mono-channel; the frame size would then be 48000 x 0.02s x 1 x 16 bit = 15360 bit = 1920 bytes.
sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000
channel: number of sound channels (if dual-channel is used, data is interleaved). Valid values: 1 (mono-channel); 2 (dual-channel)
timestamp: timestamp (ms). Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
param | desc |
frame | Audio data |
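A feeding-loop sketch built around the return value described above; running and readPcmFrame() are placeholders for your own state and PCM source:

```java
// Sketch: keep the SDK's mixing buffer around the 100 ms safety level.
TRTCCloudDef.TRTCAudioFrame frame = new TRTCCloudDef.TRTCAudioFrame();
frame.sampleRate = 48000;
frame.channel    = 1;
while (running) {
    frame.data      = readPcmFrame();                // 20 ms of 48 kHz mono PCM = 1920 bytes
    frame.timestamp = trtcCloud.generateCustomPTS();
    int bufferedMs  = trtcCloud.mixExternalAudioFrame(frame);
    if (bufferedMs >= 100) {
        try {
            Thread.sleep(20);                        // enough buffered: wait about one frame
        } catch (InterruptedException e) {
            break;
        }
    }
    // If bufferedMs < 100, keep feeding without waiting.
}
```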
void setMixExternalAudioVolume | (int publishVolume |
| int playoutVolume) |
param | desc |
playoutVolume | Set the playback volume, from 0 to 100; -1 means no change. |
publishVolume | Set the publish volume, from 0 to 100; -1 means no change. |
long generateCustomPTS | |
This API generates a custom capturing timestamp. Call it immediately after capturing a video or audio frame and pass the obtained value into the timestamp field in TRTCVideoFrame or TRTCAudioFrame.
int setLocalVideoProcessListener | (int pixelFormat
| int bufferType |
|
After this API is set, the SDK will call back the captured video frames through the listener you set so that they can be further processed by a third-party beauty filter component. Then, the SDK will encode and send the processed video frames.
param | desc |
bufferType | Specify the format of the data called back. Currently, it supports: TRTC_VIDEO_BUFFER_TYPE_TEXTURE: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_Texture_2D. TRTC_VIDEO_BUFFER_TYPE_BYTE_BUFFER: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420. TRTC_VIDEO_BUFFER_TYPE_BYTE_ARRAY: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420. |
listener | |
pixelFormat | Specify the format of the pixel called back. Currently, it supports: TRTC_VIDEO_PIXEL_FORMAT_Texture_2D: video memory-based texture scheme. TRTC_VIDEO_PIXEL_FORMAT_I420: memory-based data scheme. |
int setLocalVideoRenderListener | (int pixelFormat |
| int bufferType |
|
pixelFormat specifies the format of the data called back. Currently, Texture2D, I420, and RGBA formats are supported. bufferType specifies the buffer type. BYTE_BUFFER is suitable for the JNI layer, while BYTE_ARRAY can be used in direct operations at the Java layer.
param | desc |
bufferType | Specify the data structure of the video frame: TRTC_VIDEO_BUFFER_TYPE_TEXTURE: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_Texture_2D. TRTC_VIDEO_BUFFER_TYPE_BYTE_BUFFER: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420 or TRTC_VIDEO_PIXEL_FORMAT_RGBA. TRTC_VIDEO_BUFFER_TYPE_BYTE_ARRAY: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420 or TRTC_VIDEO_PIXEL_FORMAT_RGBA. |
listener | Callback of custom video rendering. The callback is returned once for each video frame |
pixelFormat | Specify the format of the video frame, such as: TRTC_VIDEO_PIXEL_FORMAT_Texture_2D: OpenGL texture format, which is suitable for GPU processing and has a high processing efficiency. TRTC_VIDEO_PIXEL_FORMAT_I420: standard I420 format, which is suitable for CPU processing and has a poor processing efficiency. TRTC_VIDEO_PIXEL_FORMAT_RGBA: RGBA format, which is suitable for CPU processing and has a poor processing efficiency. |
int setRemoteVideoRenderListener | (String userId |
| int pixelFormat |
| int bufferType |
|
pixelFormat specifies the format of the called back data, such as NV12, I420, and 32BGRA. bufferType specifies the buffer type. PixelBuffer has the highest efficiency, while NSData makes the SDK perform a memory conversion internally, which will result in extra performance loss.
param | desc |
bufferType | Specify video data structure type. |
listener | listen for custom rendering |
pixelFormat | Specify the format of the pixel called back |
userId | ID of the specified remote user |
Note: startRemoteView(nil) needs to be called to get the video stream of the remote user (view can be set to nil for this end); otherwise, there will be no data called back.
void setAudioFrameListener | |
int setCapturedRawAudioFrameCallbackFormat |
param | desc |
format | Audio data callback format |
int setLocalProcessedAudioFrameCallbackFormat |
param | desc |
format | Audio data callback format |
int setMixedPlayAudioFrameCallbackFormat |
param | desc |
format | Audio data callback format |
void enableCustomAudioRendering | (boolean enable) |
param | desc |
enable | Whether to enable custom audio playback. It’s disabled by default. |
void getCustomAudioRenderingFrame |
Fill the fields of audioFrame as follows:
sampleRate: sample rate (required). Valid values: 16000, 24000, 32000, 44100, 48000
channel: number of sound channels (required). 1: mono-channel; 2: dual-channel; if dual-channel is used, data is interleaved.
data: the buffer used to get audio data. You need to allocate memory for the buffer based on the duration of an audio frame.
param | desc |
audioFrame | Audio frames |
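A sketch that follows the sizing rules above to pull one 20 ms frame of 48 kHz mono audio, assuming enableCustomAudioRendering(true) was called before room entry:

```java
// Sketch: pre-allocate one 20 ms frame (48 kHz, mono, 16-bit) and let the SDK fill it.
TRTCCloudDef.TRTCAudioFrame audioFrame = new TRTCCloudDef.TRTCAudioFrame();
audioFrame.sampleRate = 48000;
audioFrame.channel    = 1;
audioFrame.data       = new byte[48000 / 50 * 2];   // 960 samples * 2 bytes = 1920 bytes
trtcCloud.getCustomAudioRenderingFrame(audioFrame);
// audioFrame.data now holds mixed playback audio for your own renderer.
```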
Note: you need to first set sampleRate and channel in audioFrame, and allocate memory for one frame of audio in advance. The SDK fills the data based on the sampleRate and channel you set.
boolean sendCustomCmdMsg | (int cmdID
| byte[] data |
| boolean reliable |
| boolean ordered) |
After the message is sent, other users in the room will receive it through the onRecvCustomCmdMsg callback in TRTCCloudDelegate.
param | desc |
cmdID | Message ID. Value range: 1–10 |
data | Message to be sent. The maximum length of one single message is 1 KB. |
ordered | Whether orderly sending is enabled, i.e., whether the data packets should be received in the same order in which they are sent; if so, a certain delay will be caused. |
reliable | Whether reliable sending is enabled. Reliable sending can achieve a higher success rate but with a longer reception delay than unreliable sending. |
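A one-call sketch sending a short UTF-8 payload; the cmdID value 1 is an arbitrary choice within the allowed 1–10 range:

```java
// Sketch: broadcast a small text message over the UDP-based custom message channel.
byte[] payload = "hello".getBytes(java.nio.charset.StandardCharsets.UTF_8);
boolean queued = trtcCloud.sendCustomCmdMsg(1, payload, true, true);  // reliable + ordered
```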
Note: reliable and ordered must be set to the same value (true or false) and cannot be set to different values currently. We recommend you set different cmdID values for messages of different types; this can reduce message delay when orderly sending is required.
boolean sendSEIMsg | (byte[] data
| int repeatCount) |
After the message is sent, other users in the room will receive it through the onRecvSEIMsg callback in TRTCCloudDelegate.
param | desc |
data | Data to be sent, which can be up to 1 KB (1,000 bytes) |
repeatCount | Data sending count |
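A sketch that repeats a short payload across three consecutive encoded frames (receivers should deduplicate, as noted below):

```java
// Sketch: piggyback a short caption on the next 3 video frames via the SEI channel.
byte[] sei = "caption:hi".getBytes(java.nio.charset.StandardCharsets.UTF_8);
boolean queued = trtcCloud.sendSEIMsg(sei, 3);  // repeatCount = 3
```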
Note:
The sending frequency and data volume of this API are limited differently from sendCustomCmdMsg. If a large amount of data is sent, the video bitrate will increase, which may reduce the video quality or even cause lagging.
If the data needs to be sent multiple times (i.e., repeatCount > 1), the data will be inserted into subsequent repeatCount video frames in a row for sending, which will increase the video bitrate.
If repeatCount is greater than 1, the data will be sent for multiple times, and the same message may be received multiple times in the onRecvSEIMsg callback; therefore, deduplication is required.
int startSpeedTest | (TRTCSpeedTestParams params) |
param | desc |
params | speed test options |
void setLogLevel | (int level) |
param | desc |
level |
void setConsoleEnabled | (boolean enabled) |
param | desc |
enabled | Specify whether to enable it, which is disabled by default |
void setLogCompressEnabled | (boolean enabled) |
param | desc |
enabled | Specify whether to enable it, which is enabled by default |
void setLogDirPath | (String path) |
By default, SDK logs are output to the following directories:
Windows: %appdata%/liteav/log
iOS/macOS: sandbox Documents/log
Android: /app directory/files/log/liteav/
param | desc |
path | Log storage path |
void setLogListener |
void showDebugView | (int showType) |
param | desc |
showType | 0: does not display; 1: displays lite edition (only with audio/video information); 2: displays full edition (with audio/video information and event information). |
public TRTCViewMargin | (float leftMargin |
| float rightMargin |
| float topMargin |
| float bottomMargin) |
Note: this API needs to be called before showDebugView for it to take effect.
param | desc |
margin | Inner margin of the dashboard. It should be noted that this is based on the percentage of parentView . Value range: 0–1 |
userId | User ID |
void callExperimentalAPI | (String jsonStr) |