TRTCCloud

Last updated: 2024-10-25 17:04:23
    Copyright (c) 2021 Tencent. All rights reserved.
    
    Module: TRTCCloud @ TXLiteAVSDK
    
    Function: TRTC's main feature API
    
    Version: 12.1
    
    TRTCCloud

    TRTCCloud

    FuncList
    DESC
    Create TRTCCloud instance (singleton mode)
    Terminate TRTCCloud instance (singleton mode)
    Add TRTC event callback
    Remove TRTC event callback
    Set the queue that drives the TRTCCloudDelegate event callback
    Enter room
    Exit room
    Switch role
    Switch role (with permission credential)
    Switch room
    Request cross-room call
    Exit cross-room call
    Set subscription mode (which must be set before room entry for it to take effect)
    Create room subinstance (for concurrent multi-room listen/watch)
    Terminate room subinstance
    
    Publish a stream
    Modify publishing parameters
    Stop publishing
    Enable the preview image of local camera (mobile)
    Enable the preview image of local camera (desktop)
    Update the preview image of local camera
    Stop camera preview
    Pause/Resume publishing local video stream
    Set placeholder image during local video pause
    Subscribe to remote user's video stream and bind video rendering control
    Update remote user's video rendering control
    Stop subscribing to remote user's video stream and release rendering control
    Stop subscribing to all remote users' video streams and release all rendering resources
    Pause/Resume subscribing to remote user's video stream
    Pause/Resume subscribing to all remote users' video streams
    Set the encoding parameters of video encoder
    Set network quality control parameters
    Set the rendering parameters of local video image
    Set the rendering mode of remote video image
    Enable dual-channel encoding mode with big and small images
    Switch the big/small image of specified remote user
    Screencapture video
    Set perspective correction coordinate points
    Set the adaptation mode of gravity sensing (version 11.7 and above)
    Enable local audio capturing and publishing
    Stop local audio capturing and publishing
    Pause/Resume publishing local audio stream
    Pause/Resume playing back remote audio stream
    Pause/Resume playing back all remote users' audio streams
    Set audio route
    Set the audio playback volume of remote user
    Set the capturing volume of local audio
    Get the capturing volume of local audio
    Set the playback volume of remote audio
    Get the playback volume of remote audio
    Enable volume reminder
    Start audio recording
    Stop audio recording
    Start local media recording
    Stop local media recording
    Set the parallel strategy of remote audio streams
    Enable 3D spatial effect
    Update self position and orientation for 3D spatial effect
    Update the specified remote user's position for 3D spatial effect
    Set the maximum 3D spatial attenuation range for userId's audio stream
    Get device management class (TXDeviceManager)
    Get beauty filter management class (TXBeautyManager)
    Add watermark
    Get sound effect management class (TXAudioEffectManager)
    Enable system audio capturing (not supported on iOS)
    Stop system audio capturing (not supported on iOS)
    Set the volume of system audio capturing
    Start in-app screen sharing (for iOS 13.0 and above only)
    Start system-level screen sharing (for iOS 11.0 and above only)
    Start screen sharing
    Stop screen sharing
    Pause screen sharing
    Resume screen sharing
    Enumerate shareable screens and windows (for macOS only)
    Select the screen or window to share (for macOS only)
    Set the video encoding parameters of screen sharing (i.e., substream) (for desktop and mobile systems)
    Set the audio mixing volume of screen sharing (for desktop systems only)
    Add specified windows to the exclusion list of screen sharing (for desktop systems only)
    Remove specified windows from the exclusion list of screen sharing (for desktop systems only)
    Remove all windows from the exclusion list of screen sharing (for desktop systems only)
    Add specified windows to the inclusion list of screen sharing (for desktop systems only)
    Remove specified windows from the inclusion list of screen sharing (for desktop systems only)
    Remove all windows from the inclusion list of screen sharing (for desktop systems only)
    Enable/Disable custom video capturing mode
    Deliver captured video frames to SDK
    Enable custom audio capturing mode
    Deliver captured audio data to SDK
    Enable/Disable custom audio track
    Mix custom audio track into SDK
    Set the publish volume and playback volume of mixed custom audio track
    Generate custom capturing timestamp
    Set video data callback for third-party beauty filters
    Set the callback of custom rendering for local video
    Set the callback of custom rendering for remote video
    Set custom audio data callback
    Set the callback format of audio frames captured by local mic
    Set the callback format of preprocessed local audio frames
    Set the callback format of audio frames to be played back by system
    Enable custom audio playback
    Get playable audio data
    Use UDP channel to send custom message to all users in room
    Use SEI channel to send custom message to all users in room
    Start network speed test (used before room entry)
    Stop network speed test
    Get SDK version information
    Set log output level
    Enable/Disable console log printing
    Enable/Disable local log compression
    Set local log storage path
    Set log callback
    Display dashboard
    Set dashboard margin
    Call experimental APIs
    Enable or disable private encryption of media streams

    sharedInstance

    sharedInstance

    Create TRTCCloud instance (singleton mode)

    Param
    DESC
    context
    It is only applicable to the Android platform. The SDK internally converts it into the ApplicationContext of Android to call the Android system API.
    Note
    1. If you use delete ITRTCCloud*, a compilation error will occur. Please use destroyTRTCCloud to release the object pointer.
    2. On Windows, macOS, or iOS, please call the getTRTCShareInstance() API.
    3. On Android, please call the getTRTCShareInstance(void *context) API.

    destroySharedInstance

    destroySharedInstance

    Terminate TRTCCloud instance (singleton mode)

    addDelegate:

    addDelegate:
    - (void)addDelegate:
    (id<TRTCCloudDelegate>)delegate

    Add TRTC event callback

    You can use TRTCCloudDelegate to get various event notifications from the SDK, such as error codes, warning codes, and audio/video status parameters.

    removeDelegate:

    removeDelegate:
    - (void)removeDelegate:
    (id<TRTCCloudDelegate>)delegate

    Remove TRTC event callback

    delegateQueue

    delegateQueue

    Set the queue that drives the TRTCCloudDelegate event callback

    If you do not specify a delegateQueue, the SDK will use the main queue (MainQueue) by default as the queue that drives TRTCCloudDelegate event callbacks.
    In other words, if you do not set the delegateQueue attribute, all callback functions in TRTCCloudDelegate will be driven by the main queue.
    Note
    If you specify a delegateQueue, do not manipulate the UI in the TRTCCloudDelegate callback functions; otherwise, thread safety issues will occur.
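    For example, callbacks can be moved off the main queue like this (trtc is assumed to be your TRTCCloud instance, and the queue label is illustrative):
    dispatch_queue_t callbackQueue = dispatch_queue_create("com.example.trtc.callback", DISPATCH_QUEUE_SERIAL);
    trtc.delegateQueue = callbackQueue;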

    enterRoom:appScene:

    enterRoom:appScene:
    - (void)enterRoom:
    (TRTCParams *)param
    appScene:
    (TRTCAppScene)scene

    Enter room

    All TRTC users need to enter a room before they can "publish" or "subscribe to" audio/video streams. "Publishing" refers to pushing their own streams to the cloud, and "subscribing to" refers to pulling the streams of other users in the room from the cloud.
    
    When calling this API, you need to specify your application scenario (TRTCAppScene) to get the best audio/video transfer experience. We provide the following four scenarios for your choice:
    Video call scenario. Use cases: [one-to-one video call], [video conferencing with up to 300 participants], [online medical diagnosis], [small class], [video interview], etc.
    In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.
    Audio call scenario. Use cases: [one-to-one audio call], [audio conferencing with up to 300 participants], [audio chat], [online Werewolf], etc.
    In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.
    Live streaming scenario. Use cases: [low-latency video live streaming], [interactive classroom for up to 100,000 participants], [live video competition], [video dating room], [remote training], [large-scale conferencing], etc.
    In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor) or audience (TRTCRoleAudience).
    Audio chat room scenario. Use cases: [Clubhouse], [online karaoke room], [music live room], [FM radio], etc.
    In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor) or audience (TRTCRoleAudience).
    
    After calling this API, you will receive the onEnterRoom(result) callback from TRTCCloudDelegate:
    If room entry succeeds, the result parameter will be a positive number (result > 0), indicating the time in milliseconds (ms) between the function call and room entry.
    If room entry fails, the result parameter will be a negative number (result < 0), indicating the TXLiteAVError code for the room entry failure.
    Param
    DESC
    param
    Room entry parameter, which is used to specify the user's identity, role, authentication credentials, and other information. For more information, please see TRTCParams.
    scene
    Application scenario, which is used to specify the use case. The same TRTCAppScene should be configured for all users in the same room.
    Note
    1. If scene is specified as TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom, you must use the role field in TRTCParams to specify the role of the current user in the room.
    2. The same scene should be configured for all users in the same room.
    3. Please ensure that enterRoom and exitRoom are used in pairs; that is, make sure the previous room is exited before the next room is entered; otherwise, many issues may occur.
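    Below is a minimal room entry sketch (trtc is assumed to be your TRTCCloud instance; the IDs and userSig are placeholders; see TRTCParams for the full field list):
    TRTCParams *params = [[TRTCParams alloc] init];
    params.sdkAppId = 1400000000; //Replace with your SDKAppID
    params.roomId = 101;
    params.userId = @"userA";
    params.userSig = @"xxxx"; //Generate on your server
    params.role = TRTCRoleAnchor; //Required for TRTCAppSceneLIVE and TRTCAppSceneVoiceChatRoom
    [trtc enterRoom:params appScene:TRTCAppSceneLIVE];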

    exitRoom

    exitRoom

    Exit room

    Calling this API will allow the user to leave the current audio or video room and release the camera, mic, speaker, and other device resources.
    After resources are released, the SDK will use the onExitRoom() callback in TRTCCloudDelegate to notify you.
    
    If you need to call enterRoom again or switch to the SDK of another provider, we recommend you wait until you receive the onExitRoom() callback, so as to avoid the problem of the camera or mic being occupied.
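    For example (the delegate method sketch assumes self was registered via addDelegate):
    [trtc exitRoom];
    
    //In your TRTCCloudDelegate implementation:
    - (void)onExitRoom:(NSInteger)reason {
        //Resources are released; it is now safe to call enterRoom again
        //or to start the SDK of another provider
    }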

    switchRole:

    switchRole:
    -(void)switchRole:
    (TRTCRoleType)role

    Switch role

    This API is used to switch the user role between anchor and audience.
    
    As video live rooms and audio chat rooms need to support an audience of up to 100,000 concurrent online users, the rule "only anchors can publish their audio/video streams" has been set. Therefore, when some users want to publish their streams (so that they can interact with anchors), they need to switch their role to "anchor" first.
    
    You can use the role field in TRTCParams during room entry to specify the user role in advance or use the switchRole API to switch roles after room entry.
    Param
    DESC
    role
    Role, which is anchor by default:
    TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room.
    TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users.
    Note
    1. This API is only applicable to two scenarios: live streaming (TRTCAppSceneLIVE) and audio chat room (TRTCAppSceneVoiceChatRoom).
    2. If the scene you specify in enterRoom is TRTCAppSceneVideoCall or TRTCAppSceneAudioCall, please do not call this API.
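    For example, an audience member who wants to co-anchor can switch roles as follows (assuming trtc is your TRTCCloud instance):
    //Switch to anchor before publishing streams
    [trtc switchRole:TRTCRoleAnchor];
    //...publish audio/video and interact with other anchors...
    //Switch back to audience when done
    [trtc switchRole:TRTCRoleAudience];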

    switchRole:privateMapKey:

    switchRole:privateMapKey:
    -(void)switchRole:
    (TRTCRoleType)role
    privateMapKey:
    (NSString*)privateMapKey

    Switch role (with permission credential)

    This API is used to switch the user role between anchor and audience.
    
    As video live rooms and audio chat rooms need to support an audience of up to 100,000 concurrent online users, the rule "only anchors can publish their audio/video streams" has been set. Therefore, when some users want to publish their streams (so that they can interact with anchors), they need to switch their role to "anchor" first.
    
    You can use the role field in TRTCParams during room entry to specify the user role in advance or use the switchRole API to switch roles after room entry.
    Param
    DESC
    privateMapKey
    Permission credential used for permission control. If you want only users with the specified userId values to enter a room or push streams, you need to use privateMapKey to restrict the permission.
    We recommend you use this parameter only if you have high security requirements. For more information, please see Enabling Advanced Permission Control.
    role
    Role, which is anchor by default:
    TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room.
    TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users.
    Note
    1. This API is only applicable to two scenarios: live streaming (TRTCAppSceneLIVE) and audio chat room (TRTCAppSceneVoiceChatRoom).
    2. If the scene you specify in enterRoom is TRTCAppSceneVideoCall or TRTCAppSceneAudioCall, please do not call this API.

    switchRoom:

    switchRoom:
    - (void)switchRoom:
    (TRTCSwitchRoomConfig *)config

    Switch room

    This API is used to quickly switch a user from one room to another.
    If the user's role is audience, calling this API is equivalent to exitRoom (current room) + enterRoom (new room).
    If the user's role is anchor, the API will retain the current audio/video publishing status while switching the room; therefore, camera preview and audio capturing will not be interrupted during the room switch.
    
    This API is suitable for the online education scenario where a supervising teacher needs to switch quickly across multiple rooms. In this scenario, using switchRoom delivers better smoothness and requires less code than exitRoom + enterRoom.
    The API call result will be called back through onSwitchRoom(errCode, errMsg) in TRTCCloudDelegate.
    Param
    DESC
    config
    Room parameter. For more information, please see TRTCSwitchRoomConfig.
    Note
    Due to the requirement for compatibility with legacy versions of the SDK, the config parameter contains both roomId and strRoomId fields. You should pay special attention as detailed below when specifying these two parameters:
    1. If you decide to use strRoomId, set roomId to 0. If both are specified, roomId will be used.
    2. All rooms need to use either strRoomId or roomId consistently; the two cannot be mixed, or many unexpected bugs will occur.
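    Below is a sketch of switching to numeric room 102 (assuming TRTCSwitchRoomConfig exposes the roomId field described above):
    TRTCSwitchRoomConfig *config = [[TRTCSwitchRoomConfig alloc] init];
    config.roomId = 102; //Use either roomId or strRoomId, never both
    [trtc switchRoom:config];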

    connectOtherRoom:

    connectOtherRoom:
    - (void)connectOtherRoom:
    (NSString *)param

    Request cross-room call

    By default, only users in the same room can make audio/video calls with each other, and the audio/video streams in different rooms are isolated from each other.
    However, you can publish the audio/video streams of an anchor in another room to the current room by calling this API. At the same time, this API will also publish the local audio/video streams to the target anchor's room.
    
    In other words, you can use this API to share the audio/video streams of two anchors in two different rooms, so that the audience in each room can watch the streams of these two anchors. This feature can be used to implement anchor competition.
    
    The result of requesting cross-room call will be returned through the onConnectOtherRoom callback in TRTCCloudDelegate.
    
    For example, after anchor A in room "101" uses connectOtherRoom() to successfully call anchor B in room "102":
    All users in room "101" will receive the onRemoteUserEnterRoom(B) and onUserVideoAvailable(B,YES) event callbacks of anchor B; that is, all users in room "101" can subscribe to the audio/video streams of anchor B.
    All users in room "102" will receive the onRemoteUserEnterRoom(A) and onUserVideoAvailable(A,YES) event callbacks of anchor A; that is, all users in room "102" can subscribe to the audio/video streams of anchor A.
    
    
    
    For compatibility with future extended fields of the cross-room call feature, the parameters are currently in JSON format.
    
    Case 1: numeric room ID
    If anchor A in room "101" wants to co-anchor with anchor B in room "102", then anchor A needs to pass in {"roomId": 102, "userId": "userB"} when calling this API.
    Below is the sample code:
    NSMutableDictionary *jsonDict = [[NSMutableDictionary alloc] init];
    [jsonDict setObject:@(102) forKey:@"roomId"];
    [jsonDict setObject:@"userB" forKey:@"userId"];
    NSData* jsonData = [NSJSONSerialization dataWithJSONObject:jsonDict options:NSJSONWritingPrettyPrinted error:nil];
    NSString* jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    [trtc connectOtherRoom:jsonString];
    
    Case 2: string room ID
    If you use a string room ID, please be sure to replace the roomId in JSON with strRoomId , such as {"strRoomId": "102", "userId": "userB"}
    Below is the sample code:
    NSMutableDictionary *jsonDict = [[NSMutableDictionary alloc] init];
    [jsonDict setObject:@"102" forKey:@"strRoomId"];
    [jsonDict setObject:@"userB" forKey:@"userId"];
    NSData* jsonData = [NSJSONSerialization dataWithJSONObject:jsonDict options:NSJSONWritingPrettyPrinted error:nil];
    NSString* jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    [trtc connectOtherRoom:jsonString];
    Param
    DESC
    param
    You need to pass in a string parameter in JSON format: roomId represents the room ID in numeric format, strRoomId represents the room ID in string format, and userId represents the user ID of the target anchor.

    disconnectOtherRoom

    disconnectOtherRoom

    Exit cross-room call

    The result will be returned through the onDisconnectOtherRoom() callback in TRTCCloudDelegate.

    setDefaultStreamRecvMode:video:

    setDefaultStreamRecvMode:video:
    - (void)setDefaultStreamRecvMode:
    (BOOL)autoRecvAudio
    video:
    (BOOL)autoRecvVideo

    Set subscription mode (which must be set before room entry for it to take effect)

    You can switch between the "automatic subscription" and "manual subscription" modes through this API:
    Automatic subscription: this is the default mode, where the user will immediately receive the audio/video streams in the room after room entry, so that the audio will be automatically played back, and the video will be automatically decoded (you still need to bind the rendering control through the startRemoteView API).
    Manual subscription: after room entry, the user needs to manually call the startRemoteView API to start subscribing to and decoding the video stream and call the muteRemoteAudio (NO) API to start playing back the audio stream.
    
    In most scenarios, users will subscribe to the audio/video streams of all anchors in the room after room entry. Therefore, TRTC adopts the automatic subscription mode by default in order to achieve the best "instant streaming experience".
    In your application scenario, if there are many audio/video streams being published at the same time in each room, and each user only wants to subscribe to 1–2 streams of them, we recommend you use the "manual subscription" mode to reduce the traffic costs.
    Param
    DESC
    autoRecvAudio
    YES: automatic subscription to audio; NO: manual subscription to audio by calling muteRemoteAudio(NO) . Default value: YES
    autoRecvVideo
    YES: automatic subscription to video; NO: manual subscription to video by calling startRemoteView . Default value: YES
    Note
    1. The configuration takes effect only if this API is called before room entry (enterRoom).
    2. In the automatic subscription mode, if the user does not call startRemoteView to subscribe to the video stream after room entry, the SDK will automatically stop subscribing to the video stream in order to reduce the traffic consumption.
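    Below is a manual subscription sketch (the userId and view are placeholders):
    //Must be called before enterRoom
    [trtc setDefaultStreamRecvMode:NO video:NO];
    //...enter the room, then subscribe explicitly to the streams you need
    [trtc muteRemoteAudio:@"userB" mute:NO];
    [trtc startRemoteView:@"userB" streamType:TRTCVideoStreamTypeBig view:remoteView];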

    createSubCloud

    createSubCloud

    Create room subinstance (for concurrent multi-room listen/watch)

    TRTCCloud was originally designed to work in the singleton mode, which limited the ability to watch concurrently in multiple rooms.
    By calling this API, you can create multiple TRTCCloud instances, so that you can enter multiple different rooms at the same time to listen/watch audio/video streams.
    
    However, it should be noted that your ability to publish audio and video streams in multiple TRTCCloud instances will be limited.
    
    This feature is mainly used in the "super small class" use case in the online education scenario to break the limit that "only up to 50 users can publish their audio/video streams simultaneously in one TRTC room".
    
    Below is the sample code:
    //In the small room that needs interaction, enter the room as an anchor and push audio and video streams
    TRTCCloud *mainCloud = [TRTCCloud sharedInstance];
    TRTCParams *mainParams = [[TRTCParams alloc] init];
    //Fill your params
    mainParams.role = TRTCRoleAnchor;
    [mainCloud enterRoom:mainParams appScene:TRTCAppSceneLIVE];
    //...
    [mainCloud startLocalPreview:YES view:videoView];
    [mainCloud startLocalAudio:TRTCAudioQualityDefault];
    
    //In the large room that only needs to watch, enter the room as an audience and pull audio and video streams
    TRTCCloud *subCloud = [mainCloud createSubCloud];
    TRTCParams *subParams = [[TRTCParams alloc] init];
    //Fill your params
    subParams.role = TRTCRoleAudience;
    [subCloud enterRoom:subParams appScene:TRTCAppSceneLIVE];
    //...
    [subCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:videoView];
    //...
    //Exit from new room and release it.
    [subCloud exitRoom];
    [mainCloud destroySubCloud:subCloud];
    Note
    The same user can enter multiple rooms with different roomId values by using the same userId.
    Two devices cannot use the same userId to enter the same room with a specified roomId.
    You can set a separate TRTCCloudDelegate for each instance to receive its own event notifications.
    The same user can publish streams in multiple TRTCCloud instances at the same time and can also call local audio/video APIs in a subinstance. Note the following:
    Audio is captured from the same microphone (or custom data source) simultaneously in all instances, and the result of audio-device-related API calls is determined by the most recent call.
    Likewise, the result of camera-related API calls (such as startLocalPreview) is determined by the most recent call.

    Return Desc:

    TRTCCloud subinstance

    destroySubCloud:

    destroySubCloud:
    - (void)destroySubCloud:
    (TRTCCloud *)subCloud

    Terminate room subinstance

    Param
    DESC
    subCloud
    The subinstance created by createSubCloud that you want to terminate

    startPublishMediaStream:encoderParam:mixingConfig:

    startPublishMediaStream:encoderParam:mixingConfig:
    - (void)startPublishMediaStream:
    (TRTCPublishTarget*)target
    encoderParam:
    (nullable TRTCStreamEncoderParam*)param
    mixingConfig:
    (nullable TRTCStreamMixingConfig*)config

    Publish a stream

    After this API is called, the TRTC server will relay the stream of the local user to a CDN (after transcoding or without transcoding), or transcode and publish the stream to a TRTC room.
    You can use the TRTCPublishMode parameter in TRTCPublishTarget to specify the publishing mode.
    Param
    DESC
    config
    The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig.
    param
    The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we also recommend you set this parameter. For details, see TRTCStreamEncoderParam.
    target
    The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget.
    Note
    1. The SDK will send a task ID to you via the onStartPublishMediaStream callback.
    2. You can start a publishing task only once and cannot initiate two tasks that use the same publishing mode and publishing CDN URL. Note the task ID returned, which you need to pass to updatePublishMediaStream to modify the publishing parameters or to stopPublishMediaStream to stop the task.
    3. You can specify up to 10 CDN URLs in target . You will be charged only once for transcoding even if you relay to multiple CDNs.
    4. To avoid causing errors, do not specify the same URLs for different publishing tasks executed at the same time. We recommend you add "sdkappid_roomid_userid_main" to URLs to distinguish them from one another and avoid application conflicts.

    updatePublishMediaStream:publishTarget:encoderParam:mixingConfig:

    updatePublishMediaStream:publishTarget:encoderParam:mixingConfig:
    - (void)updatePublishMediaStream:
    (NSString *)taskId
    publishTarget:
    (TRTCPublishTarget*)target
    encoderParam:
    (nullable TRTCStreamEncoderParam*)param
    mixingConfig:
    (nullable TRTCStreamMixingConfig*)config

    Modify publishing parameters

    You can use this API to change the parameters of a publishing task initiated by startPublishMediaStream.
    Param
    DESC
    config
    The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig.
    param
    The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we recommend you set this parameter. For details, see TRTCStreamEncoderParam.
    target
    The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget.
    taskId
    The task ID returned to you via the onStartPublishMediaStream callback.
    Note
    1. You can use this API to add or remove CDN URLs to publish to (you can publish to up to 10 CDNs at a time). To avoid causing errors, do not specify the same URLs for different tasks executed at the same time.
    2. You can use this API to switch a relaying task to transcoding or vice versa. For example, in cross-room communication, you can first call startPublishMediaStream to relay to a CDN. When the anchor requests cross-room communication, call this API, passing in the task ID to switch the relaying task to a transcoding task. This can ensure that the live stream and CDN playback are not interrupted (you need to keep the encoding parameters consistent).
    3. You cannot switch the output of the same task between audio-only, video-only, and audio plus video.

    stopPublishMediaStream:

    stopPublishMediaStream:
    - (void)stopPublishMediaStream:
    (NSString *)taskId

    Stop publishing

    You can use this API to stop a task initiated by startPublishMediaStream.
    Param
    DESC
    taskId
    The task ID returned to you via the onStartPublishMediaStream callback.
    Note
    1. If the task ID is not saved to your backend, you can call startPublishMediaStream again when an anchor re-enters the room after abnormal exit. The publishing will fail, but the TRTC backend will return the task ID to you.
    2. If taskId is left empty, the TRTC backend will end all tasks you started through startPublishMediaStream. You can leave it empty if you have started only one task or want to stop all publishing tasks started by you.
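    For example (assuming taskId holds the ID received in the onStartPublishMediaStream callback):
    //Stop a specific task started by startPublishMediaStream
    [trtc stopPublishMediaStream:taskId];
    //Or stop all tasks you started
    [trtc stopPublishMediaStream:@""];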

    startLocalPreview:view:

    startLocalPreview:view:
    - (void)startLocalPreview:
    (BOOL)frontCamera
    view:
    (nullable TXView *)view

    Enable the preview image of local camera (mobile)

    If this API is called before enterRoom , the SDK will only enable the camera and wait until enterRoom is called before starting push.
    If it is called after enterRoom , the SDK will enable the camera and automatically start pushing the video stream.
    When the first camera video frame starts to be rendered, you will receive the onCameraDidReady callback in TRTCCloudDelegate.
    Param
    DESC
    frontCamera
    YES: front camera; NO: rear camera
    view
    Control that carries the video image
    Note
    If you want to preview the camera image and adjust the beauty filter parameters through BeautyManager before going live, you can:
    Scheme 1. Call startLocalPreview before calling enterRoom
    Scheme 2. Call startLocalPreview and muteLocalVideo(YES) after calling enterRoom
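    Below is a sketch of Scheme 2 (params and previewView are placeholders):
    [trtc enterRoom:params appScene:TRTCAppSceneLIVE];
    //Preview without publishing the video stream
    [trtc startLocalPreview:YES view:previewView];
    [trtc muteLocalVideo:TRTCVideoStreamTypeBig mute:YES];
    //...adjust beauty filter parameters via the beauty manager, then start publishing
    [trtc muteLocalVideo:TRTCVideoStreamTypeBig mute:NO];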

    startLocalPreview:

    startLocalPreview:
    - (void)startLocalPreview:
    (nullable TXView *)view

    Enable the preview image of local camera (desktop)

    Before this API is called, setCurrentCameraDevice can be called first to select whether to use the macOS device's built-in camera or an external camera.
    If this API is called before enterRoom , the SDK will only enable the camera and wait until enterRoom is called before starting push.
    If it is called after enterRoom , the SDK will enable the camera and automatically start pushing the video stream.
    When the first camera video frame starts to be rendered, you will receive the onCameraDidReady callback in TRTCCloudDelegate.
    Param
    DESC
    view
    Control that carries the video image
    Note
    If you want to preview the camera image and adjust the beauty filter parameters through BeautyManager before going live, you can:
    Scheme 1. Call startLocalPreview before calling enterRoom
    Scheme 2. Call startLocalPreview and muteLocalVideo(YES) after calling enterRoom

    updateLocalView:

    updateLocalView:
    - (void)updateLocalView:
    (nullable TXView *)view

    Update the preview image of local camera

    stopLocalPreview

    stopLocalPreview

    Stop camera preview

    muteLocalVideo:mute:

    muteLocalVideo:mute:
    - (void)muteLocalVideo:
    (TRTCVideoStreamType)streamType
    mute:
    (BOOL)mute

    Pause/Resume publishing local video stream

    This API can pause (or resume) publishing the local video image. After the pause, other users in the same room will not be able to see the local image.
    When TRTCVideoStreamTypeBig is specified, this API is equivalent to the startLocalPreview/stopLocalPreview pair of APIs, but it offers better performance and a faster response.
    The startLocalPreview/stopLocalPreview APIs need to enable or disable the camera. These are hardware device operations and therefore time-consuming.
    In contrast, muteLocalVideo only pauses or resumes the data stream at the software level, so it is more efficient and better suited to scenarios where frequent switching is needed.
    
    After local video publishing is paused, other members in the same room will receive the onUserVideoAvailable(userId, NO) callback notification.
    After local video publishing is resumed, other members in the same room will receive the onUserVideoAvailable(userId, YES) callback notification.
    Param
    DESC
    mute
    YES: pause; NO: resume
    streamType
    Specifies which video stream to pause (or resume). Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported
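    A minimal sketch of pausing and resuming the big image stream without touching the camera hardware:

    ```objectivec
    // Temporarily hide the local camera image; faster than
    // stopLocalPreview/startLocalPreview because the camera stays on.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    [trtcCloud muteLocalVideo:TRTCVideoStreamTypeBig mute:YES];  // pause publishing
    // ... other users now receive onUserVideoAvailable(userId, NO) ...
    [trtcCloud muteLocalVideo:TRTCVideoStreamTypeBig mute:NO];   // resume publishing
    ```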

    setVideoMuteImage:fps:

    setVideoMuteImage:fps:
    - (void)setVideoMuteImage:
    (nullable TXImage *)image
    fps:
    (NSInteger)fps

    Set placeholder image during local video pause

    When you call muteLocalVideo(YES) to pause the local video image, you can set a placeholder image by calling this API. Then, other users in the room will see this image instead of a black screen.
    Param
    DESC
    fps
    Frame rate of the placeholder image. Minimum value: 5. Maximum value: 10. Default value: 5
    image
    Placeholder image. A null value means that no more video stream data will be sent after muteLocalVideo . The default value is null.

    startRemoteView:streamType:view:

    startRemoteView:streamType:view:
    - (void)startRemoteView:
    (NSString *)userId
    streamType:
    (TRTCVideoStreamType)streamType
    view:
    (nullable TXView *)view

    Subscribe to remote user's video stream and bind video rendering control

    Calling this API allows the SDK to pull the video stream of the specified userId and render it to the rendering control specified by the view parameter. You can set the display mode of the video image through setRemoteRenderParams.
    If you already know the userId of a user who has a video stream in the room, you can directly call startRemoteView to subscribe to the user's video image.
    If you don't know which users in the room are publishing video streams, you can wait for the notification from onUserVideoAvailable after enterRoom .
    
    Calling this API only starts pulling the video stream, and the image needs to be loaded and buffered at this time. After the buffering is completed, you will receive a notification from onFirstVideoFrame.
    Param
    DESC
    streamType
    Video stream type of the userId specified for watching:
    HD big image: TRTCVideoStreamTypeBig
    Smooth small image: TRTCVideoStreamTypeSmall (the remote user should enable dual-channel encoding through enableEncSmallVideoStream for this parameter to take effect)
    Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub
    userId
    ID of the specified remote user
    view
    Rendering control that carries the video image
    Note
    The following requires your attention:
    1. The SDK supports watching the big image and substream image or small image and substream image of a userId at the same time, but does not support watching the big image and small image at the same time.
    2. Only when the specified userId enables dual-channel encoding through enableEncSmallVideoStream can the user's small image be viewed.
    3. If the small image of the specified userId does not exist, the SDK will switch to the big image of the user by default.
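    A typical pattern is to drive subscriptions from the onUserVideoAvailable notification. This sketch assumes the class adopts TRTCCloudDelegate and that `self.remoteViews` is your own userId-to-view lookup (not part of the SDK):

    ```objectivec
    // Subscribe when a remote user starts publishing; unsubscribe when they stop.
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
        TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
        if (available) {
            TXView *view = self.remoteViews[userId]; // your own view lookup
            [trtcCloud startRemoteView:userId
                            streamType:TRTCVideoStreamTypeBig
                                  view:view];
        } else {
            [trtcCloud stopRemoteView:userId streamType:TRTCVideoStreamTypeBig];
        }
    }
    ```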

    updateRemoteView:streamType:forUser:

    updateRemoteView:streamType:forUser:
    - (void)updateRemoteView:
    (nullable TXView *)view
    streamType:
    (TRTCVideoStreamType)streamType
    forUser:
    (NSString *)userId

    Update remote user's video rendering control

    This API can be used to update the rendering control of the remote video image. It is often used in interactive scenarios where the display area needs to be switched.
    Param
    DESC
    streamType
    Type of the stream for which to set the preview window (only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported)
    userId
    ID of the specified remote user
    view
    Control that carries the video image

    stopRemoteView:streamType:

    stopRemoteView:streamType:
    - (void)stopRemoteView:
    (NSString *)userId
    streamType:
    (TRTCVideoStreamType)streamType

    Stop subscribing to remote user's video stream and release rendering control

    Calling this API will cause the SDK to stop receiving the user's video stream and release the decoding and rendering resources for the stream.
    Param
    DESC
    streamType
    Video stream type of the userId specified for watching:
    HD big image: TRTCVideoStreamTypeBig
    Smooth small image: TRTCVideoStreamTypeSmall
    Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub
    userId
    ID of the specified remote user

    stopAllRemoteView

    stopAllRemoteView

    Stop subscribing to all remote users' video streams and release all rendering resources

    Calling this API will cause the SDK to stop receiving all remote video streams and release all decoding and rendering resources.
    Note
    If a substream image (screen sharing) is being displayed, it will also be stopped.

    muteRemoteVideoStream:streamType:mute:

    muteRemoteVideoStream:streamType:mute:
    - (void)muteRemoteVideoStream:
    (NSString*)userId
    streamType:
    (TRTCVideoStreamType)streamType
    mute:
    (BOOL)mute

    Pause/Resume subscribing to remote user's video stream

    This API only pauses/resumes receiving the specified user's video stream but does not release the display resources; therefore, the video image will freeze at the last frame received before the call.
    Param
    DESC
    mute
    Whether to pause receiving
    streamType
    Specify for which video stream to pause (or resume):
    HD big image: TRTCVideoStreamTypeBig
    Smooth small image: TRTCVideoStreamTypeSmall
    Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub
    userId
    ID of the specified remote user
    Note
    This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).
    After calling this API to pause receiving the video stream from a specific user, simply calling the startRemoteView API will not be able to play the video from that user. You need to call muteRemoteVideoStream(NO) or muteAllRemoteVideoStreams(NO) to resume it.

    muteAllRemoteVideoStreams:

    muteAllRemoteVideoStreams:
    - (void)muteAllRemoteVideoStreams:
    (BOOL)mute

    Pause/Resume subscribing to all remote users' video streams

    This API only pauses/resumes receiving all users' video streams but does not release the display resources; therefore, the video image will freeze at the last frame received before the call.
    Param
    DESC
    mute
    Whether to pause receiving
    Note
    This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).
    After calling this API to pause receiving video streams from all users, simply calling the startRemoteView API will not play the video of a specific user. You need to call muteRemoteVideoStream(NO) or muteAllRemoteVideoStreams(NO) to resume it.

    setVideoEncoderParam:

    setVideoEncoderParam:
    - (void)setVideoEncoderParam:
    (TRTCVideoEncParam*)param

    Set the encoding parameters of video encoder

    This setting can determine the quality of image viewed by remote users, which is also the image quality of on-cloud recording files.
    Param
    DESC
    param
    It is used to set relevant parameters for the video encoder. For more information, please see TRTCVideoEncParam.
    Note
    Beginning with v11.5, the encoded output resolution is aligned to a multiple of 8 in width and 2 in height, rounding down. For example, an input resolution of 540x960 results in an actual encoded output resolution of 536x960.
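    A sketch of a typical portrait-mode configuration. Field names follow TRTCVideoEncParam; the resolution and bitrate values here are illustrative, so check the header of your SDK version:

    ```objectivec
    // Typical encoder settings for a portrait-mode video call.
    TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
    encParam.videoResolution = TRTCVideoResolution_960_540;
    encParam.resMode         = TRTCVideoResolutionModePortrait; // 540x960 on screen
    encParam.videoFps        = 15;
    encParam.videoBitrate    = 1300; // Kbps
    [[TRTCCloud sharedInstance] setVideoEncoderParam:encParam];
    // Since v11.5 the output width is aligned down to a multiple of 8,
    // so 540x960 is actually encoded as 536x960.
    ```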

    setNetworkQosParam:

    setNetworkQosParam:
    - (void)setNetworkQosParam:
    (TRTCNetworkQosParam*)param

    Set network quality control parameters

    This setting determines the quality control policy in a poor network environment, such as "image quality preferred" or "smoothness preferred".
    Param
    DESC
    param
    It is used to set relevant parameters for network quality control. For details, please refer to TRTCNetworkQosParam.

    setLocalRenderParams:

    setLocalRenderParams:
    - (void)setLocalRenderParams:
    (TRTCRenderParams *)params

    Set the rendering parameters of local video image

    The parameters that can be set include video image rotation angle, fill mode, and mirror mode.
    Param
    DESC
    params
    Video image rendering parameters. For more information, please see TRTCRenderParams.

    setRemoteRenderParams:streamType:params:

    setRemoteRenderParams:streamType:params:
    - (void)setRemoteRenderParams:
    (NSString *)userId
    streamType:
    (TRTCVideoStreamType)streamType
    params:
    (TRTCRenderParams *)params

    Set the rendering mode of remote video image

    The parameters that can be set include video image rotation angle, fill mode, and mirror mode.
    Param
    DESC
    params
    Video image rendering parameters. For more information, please see TRTCRenderParams.
    streamType
    It can be set to the primary stream image (TRTCVideoStreamTypeBig) or substream image (TRTCVideoStreamTypeSub).
    userId
    ID of the specified remote user

    enableEncSmallVideoStream:withQuality:

    enableEncSmallVideoStream:withQuality:
    - (int)enableEncSmallVideoStream:
    (BOOL)enable
    withQuality:
    (TRTCVideoEncParam*)smallVideoEncParam

    Enable dual-channel encoding mode with big and small images

    In this mode, the current user's encoder outputs two video streams at the same time, i.e., the HD big image and the smooth small image (only one audio stream is output, though).
    In this way, other users in the room can choose to subscribe to the HD big image or Smooth small image according to their own network conditions or screen size.
    Param
    DESC
    enable
    Whether to enable small image encoding. Default value: NO
    smallVideoEncParam
    Video parameters of small image stream
    Note
    Dual-channel encoding will consume more CPU resources and network bandwidth; therefore, this feature can be enabled on macOS, Windows, or high-spec tablets, but is not recommended for phones.

    Return Desc:

    0: success; -1: the current big image has been set to a lower quality, and it is not necessary to enable dual-channel encoding
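    A sketch of enabling the small stream so that viewers on poor networks can subscribe to it instead of the big image. The resolution, frame rate, and bitrate values are illustrative assumptions:

    ```objectivec
    // Publish an extra low-resolution stream alongside the HD big image.
    TRTCVideoEncParam *smallParam = [[TRTCVideoEncParam alloc] init];
    smallParam.videoResolution = TRTCVideoResolution_320_180;
    smallParam.videoFps        = 15;
    smallParam.videoBitrate    = 100; // Kbps
    int ret = [[TRTCCloud sharedInstance] enableEncSmallVideoStream:YES
                                                        withQuality:smallParam];
    // ret == 0 on success; -1 means the big image is already low-quality,
    // so dual-channel encoding is unnecessary.
    ```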

    setRemoteVideoStreamType:type:

    setRemoteVideoStreamType:type:
    - (void)setRemoteVideoStreamType:
    (NSString*)userId
    type:
    (TRTCVideoStreamType)streamType

    Switch the big/small image of specified remote user

    After an anchor in a room enables dual-channel encoding, the video image that other users in the room subscribe to through startRemoteView will be HD big image by default.
    You can use this API to select whether the image subscribed to is the big image or small image. The API can take effect before or after startRemoteView is called.
    Param
    DESC
    streamType
    Video stream type, i.e., big image or small image. Default value: big image
    userId
    ID of the specified remote user
    Note
    To implement this feature, the target user must have enabled the dual-channel encoding mode through enableEncSmallVideoStream; otherwise, this API will not work.

    snapshotVideo:type:sourceType:

    snapshotVideo:type:sourceType:
    - (void)snapshotVideo:
    (nullable NSString *)userId
    type:
    (TRTCVideoStreamType)streamType
    sourceType:
    (TRTCSnapshotSourceType)sourceType

    Screencapture video

    You can use this API to screencapture the local video image or the primary stream image and substream (screen sharing) image of a remote user.
    Param
    DESC
    sourceType
    Video image source, which can be the video stream image (TRTCSnapshotSourceTypeStream, generally in higher definition), the video rendering image (TRTCSnapshotSourceTypeView), or the captured picture (TRTCSnapshotSourceTypeCapture). A screenshot taken from the captured picture is clearer.
    streamType
    Video stream type, which can be the primary stream image (TRTCVideoStreamTypeBig, generally for camera) or substream image (TRTCVideoStreamTypeSub, generally for screen sharing)
    userId
    User ID. A null value indicates to screencapture the local video.
    Note
    On Windows, only video image from the TRTCSnapshotSourceTypeStream source can be screencaptured currently.

    setPerspectiveCorrectionWithUser:srcPoints:dstPoints:

    setPerspectiveCorrectionWithUser:srcPoints:dstPoints:
    - (void)setPerspectiveCorrectionWithUser:
    (nullable NSString *)userId
    srcPoints:
    (nullable NSArray *)srcPoints
    dstPoints:
    (nullable NSArray *)dstPoints

    Sets perspective correction coordinate points.

    This function allows you to specify coordinate areas for perspective correction.
    Param
    DESC
    dstPoints
    The coordinates of the four vertices of the target corrected area should be passed in the order of top-left, bottom-left, top-right, bottom-right. All coordinates need to be normalized to the [0,1] range based on the render view width and height, or null to stop perspective correction of the corresponding stream.
    srcPoints
    The coordinates of the four vertices of the original stream image area should be passed in the order of top-left, bottom-left, top-right, bottom-right. All coordinates need to be normalized to the [0,1] range based on the render view width and height, or null to stop perspective correction of the corresponding stream.
    userId
    userId which corresponding to the target stream. If null value is specified, it indicates that the function is applied to the local stream.

    setGravitySensorAdaptiveMode:

    setGravitySensorAdaptiveMode:
    - (void)setGravitySensorAdaptiveMode:

    Set the adaptation mode of gravity sensing (version 11.7 and above)

    After gravity sensing is enabled, if the capturing device rotates, the image on both the capturing side and the audience side is rendered accordingly, so that the picture in view always faces up.
    It takes effect only for camera capture inside the SDK, and only on mobile devices.
    1. This API works only on the capturing side. Enabling it has no effect if you only watch streams in the room.
    2. When the capturing device is rotated 90 or 270 degrees, the picture seen by the capturer or the audience may be cropped to keep the proportions coordinated.

    startLocalAudio:

    startLocalAudio:
    - (void)startLocalAudio:
    (TRTCAudioQuality)quality

    Enable local audio capturing and publishing

    The SDK does not enable the mic by default. When a user wants to publish the local audio, the user needs to call this API to enable mic capturing and encode and publish the audio to the current room.
    After local audio capturing and publishing is enabled, other users in the room will receive the onUserAudioAvailable(userId, YES) notification.
    Param
    DESC
    quality
    Sound quality
    TRTCAudioQualitySpeech - Smooth: sample rate: 16 kHz; mono channel; audio bitrate: 16 Kbps. This is suitable for audio call scenarios, such as online meeting and audio call.
    TRTCAudioQualityDefault - Default: sample rate: 48 kHz; mono channel; audio bitrate: 50 Kbps. This is the default sound quality of the SDK and recommended if there are no special requirements.
    TRTCAudioQualityMusic - HD: sample rate: 48 kHz; dual channel + full band; audio bitrate: 128 Kbps. This is suitable for scenarios where Hi-Fi music transfer is required, such as online karaoke and music live streaming.
    Note
    This API will check the mic permission. If the current application does not have permission to use the mic, the SDK will automatically ask the user to grant the mic permission.
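    A minimal sketch of enabling mic capture with a quality preset matched to the scenario:

    ```objectivec
    // Enable mic capture and publish audio to the current room.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    [trtcCloud startLocalAudio:TRTCAudioQualityDefault]; // general voice use
    // For music live streaming, use TRTCAudioQualityMusic instead.
    // Other users receive onUserAudioAvailable(userId, YES) once publishing starts.
    ```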

    stopLocalAudio

    stopLocalAudio

    Stop local audio capturing and publishing

    After local audio capturing and publishing is stopped, other users in the room will receive the onUserAudioAvailable(userId, NO) notification.

    muteLocalAudio:

    muteLocalAudio:
    - (void)muteLocalAudio:
    (BOOL)mute

    Pause/Resume publishing local audio stream

    After local audio publishing is paused, other users in the room will receive the onUserAudioAvailable(userId, NO) notification.
    After local audio publishing is resumed, other users in the room will receive the onUserAudioAvailable(userId, YES) notification.
    
    Different from stopLocalAudio, muteLocalAudio(YES) does not release the mic permission; instead, it continues to send mute packets with extremely low bitrate.
    This is very suitable for scenarios that require on-cloud recording, as video file formats such as MP4 have a high requirement for audio continuity, while an MP4 recording file cannot be played back smoothly if stopLocalAudio is used.
    Therefore, muteLocalAudio instead of stopLocalAudio is recommended in scenarios where the requirement for recording file quality is high.
    Param
    DESC
    mute
    YES: mute; NO: unmute

    muteRemoteAudio:mute:

    muteRemoteAudio:mute:
    - (void)muteRemoteAudio:
    (NSString *)userId
    mute:
    (BOOL)mute

    Pause/Resume playing back remote audio stream

    When you mute the remote audio of a specified user, the SDK will stop playing back the user's audio and pulling the user's audio data.
    Param
    DESC
    mute
    YES: mute; NO: unmute
    userId
    ID of the specified remote user
    Note
    This API works when called either before or after room entry (enterRoom), and the mute status will be reset to NO after room exit (exitRoom).

    muteAllRemoteAudio:

    muteAllRemoteAudio:
    - (void)muteAllRemoteAudio:
    (BOOL)mute

    Pause/Resume playing back all remote users' audio streams

    When you mute the audio of all remote users, the SDK will stop playing back all their audio streams and pulling all their audio data.
    Param
    DESC
    mute
    YES: mute; NO: unmute
    Note
    This API works when called either before or after room entry (enterRoom), and the mute status will be reset to NO after room exit (exitRoom).

    setAudioRoute:

    setAudioRoute:
    - (void)setAudioRoute:
    (TRTCAudioRoute)route

    Set audio route

    Setting "audio route" is to determine whether the sound is played back from the speaker or receiver of a mobile device; therefore, this API is only applicable to mobile devices such as phones.
    
    Generally, a phone has two speakers: one is the receiver at the top, and the other is the stereo speaker at the bottom.
    If audio route is set to the receiver, the volume is relatively low, and the sound can be heard clearly only when the phone is put near the ear. This mode has a high level of privacy and is suitable for answering calls.
    If audio route is set to the speaker, the volume is relatively high, so there is no need to put the phone near the ear. Therefore, this mode can implement the "hands-free" feature.
    Param
    DESC
    route
    Audio route, i.e., whether the audio is output by speaker or receiver. Default value: TRTCAudioModeSpeakerphone

    setRemoteAudioVolume:volume:

    setRemoteAudioVolume:volume:
    - (void)setRemoteAudioVolume:
    (NSString *)userId
    volume:
    (int)volume

    Set the audio playback volume of remote user

    You can mute the audio of a remote user through setRemoteAudioVolume(userId, 0) .
    Param
    DESC
    userId
    ID of the specified remote user
    volume
    Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
    Note
    If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

    setAudioCaptureVolume:

    setAudioCaptureVolume:
    - (void)setAudioCaptureVolume:
    (NSInteger)volume

    Set the capturing volume of local audio

    Param
    DESC
    volume
    Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
    Note
    If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

    getAudioCaptureVolume

    getAudioCaptureVolume

    Get the capturing volume of local audio

    setAudioPlayoutVolume:

    setAudioPlayoutVolume:
    - (void)setAudioPlayoutVolume:
    (NSInteger)volume

    Set the playback volume of remote audio

    This API controls the volume of the sound ultimately delivered by the SDK to the system for playback. It affects the volume of the recorded local audio file but not the volume of in-ear monitoring.
    Param
    DESC
    volume
    Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
    Note
    If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

    getAudioPlayoutVolume

    getAudioPlayoutVolume

    Get the playback volume of remote audio

    enableAudioVolumeEvaluation:withParams:

    enableAudioVolumeEvaluation:withParams:
    - (void)enableAudioVolumeEvaluation:
    (BOOL)enable
    withParams:

    Enable volume reminder

    After this feature is enabled, the SDK will return the volume assessment information of the stream-sending local user and of the remote users in the onUserVoiceVolume callback of TRTCCloudDelegate.
    Param
    DESC
    enable
    Whether to enable the volume prompt. It’s disabled by default.
    params
    Volume evaluation and other related parameters, please see TRTCAudioVolumeEvaluateParams
    Note
    To enable this feature, call this API before calling startLocalAudio .

    startAudioRecording:

    startAudioRecording:
    - (int)startAudioRecording:

    Start audio recording

    After you call this API, the SDK will selectively record local and remote audio streams (such as local audio, remote audio, background music, and sound effects) into a local file.
    This API works when called either before or after room entry. If a recording task has not been stopped through stopAudioRecording before room exit, it will be automatically stopped after room exit.
    The startup and completion status of the recording will be notified through local recording-related callbacks. See TRTCCloud related callbacks for reference.
    Param
    DESC
    param
    Recording parameter. For more information, please see TRTCAudioRecordingParams
    Note
    Since version 11.5, the results of audio recording have been changed to be notified through asynchronous callbacks instead of return values. Please refer to the relevant callbacks of TRTCCloud.

    Return Desc:

    0: success; -1: audio recording has been started; -2: failed to create file or directory; -3: the audio format of the specified file extension is not supported.

    stopAudioRecording

    stopAudioRecording

    Stop audio recording

    If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.

    startLocalRecording:

    startLocalRecording:
    - (void)startLocalRecording:
    (TRTCLocalRecordingParams *)params

    Start local media recording

    This API records the audio/video content during live streaming into a local file.
    Param
    DESC
    params
    Recording parameter. For more information, please see TRTCLocalRecordingParams

    stopLocalRecording

    stopLocalRecording

    Stop local media recording

    If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.

    setRemoteAudioParallelParams:

    setRemoteAudioParallelParams:
    - (void)setRemoteAudioParallelParams:
    (TRTCAudioParallelParams*)params

    Set the parallel strategy of remote audio streams

    For rooms with many speakers.
    Param
    DESC
    params
    Audio parallel parameter. For more information, please see TRTCAudioParallelParams

    enable3DSpatialAudioEffect:

    enable3DSpatialAudioEffect:
    - (void)enable3DSpatialAudioEffect:
    (BOOL)enabled

    Enable 3D spatial effect

    Enable the 3D spatial effect. Note that the TRTCAudioQualitySpeech (smooth) or TRTCAudioQualityDefault (default) audio quality should be used.
    Param
    DESC
    enabled
    Whether to enable 3D spatial effect. It’s disabled by default.

    updateSelf3DSpatialPosition

    updateSelf3DSpatialPosition

    Update self position and orientation for 3D spatial effect

    Update your own position and orientation in the world coordinate system. The SDK will calculate the relative position between you and the remote users according to the parameters of this method, and then render the spatial sound effect. Note that each array should have a length of 3.
    Param
    DESC
    axisForward
    The unit vector of the forward axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn.
    axisRight
    The unit vector of the right axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn.
    axisUp
    The unit vector of the up axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn.
    position
    The coordinate of self in the world coordinate system. The three values represent the forward, right and up coordinate values in turn.
    Note
    Please limit the calling frequency appropriately. It's recommended that the interval between two operations be at least 100ms.

    updateRemote3DSpatialPosition:

    updateRemote3DSpatialPosition:
    - (void)updateRemote3DSpatialPosition:
    (NSString *)userId

    Update the specified remote user's position for 3D spatial effect

    Update the specified remote user's position in the world coordinate system. The SDK will calculate the relative position between you and the remote users according to the parameters of this method, and then render the spatial sound effect. Note that the array should have a length of 3.
    Param
    DESC
    position
    The coordinate of the remote user in the world coordinate system. The three values represent the forward, right and up coordinate values in turn.
    userId
    ID of the specified remote user.
    Note
    Please limit the calling frequency appropriately. It's recommended that the interval between two operations of the same remote user be at least 100ms.

    set3DSpatialReceivingRange:range:

    set3DSpatialReceivingRange:range:
    - (void)set3DSpatialReceivingRange:
    (NSString *)userId
    range:
    (NSInteger)range

    Set the maximum 3D spatial attenuation range for userId's audio stream

    After the range is set, the specified user's audio stream will attenuate to zero within that range.
    Param
    DESC
    range
    Maximum attenuation range of the audio stream.
    userId
    ID of the specified user.

    getDeviceManager

    getDeviceManager

    Get device management class (TXDeviceManager)

    getBeautyManager

    getBeautyManager

    Get beauty filter management class (TXBeautyManager)

    You can use the following features with beauty filter management:
    Set beauty effects such as "skin smoothing", "brightening", and "rosy skin".
    Set face adjustment effects such as "eye enlarging", "face slimming", "chin slimming", "chin lengthening/shortening", "face shortening", "nose narrowing", "eye brightening", "teeth whitening", "eye bag removal", "wrinkle removal", and "smile line removal".
    Set face adjustment effects such as "hairline", "eye distance", "eye corners", "mouth shape", "nose wing", "nose position", "lip thickness", and "face shape".
    Set makeup effects such as "eye shadow" and "blush".
    Set animated effects such as animated sticker and facial pendant.

    setWatermark:streamType:rect:

    setWatermark:streamType:rect:
    - (void)setWatermark:
    (nullable TXImage*)image
    streamType:
    (TRTCVideoStreamType)streamType
    rect:
    (CGRect)rect

    Add watermark

    The watermark position is determined by the rect parameter, which is a quadruple in the format of (x, y, width, height).
    x: X coordinate of watermark, which is a floating-point number between 0 and 1.
    y: Y coordinate of watermark, which is a floating-point number between 0 and 1.
    width: width of watermark, which is a floating-point number between 0 and 1.
    height: it does not need to be set. The SDK will automatically calculate it according to the watermark image's aspect ratio.
    
    Sample parameter:
    If the encoding resolution of the current video is 540x960, and the rect parameter is set to (0.1, 0.1, 0.2, 0.0),
    then the coordinates of the top-left point of the watermark will be (540 * 0.1, 960 * 0.1), i.e., (54, 96), the watermark width will be 540 * 0.2 = 108 px, and the watermark height will be calculated automatically by the SDK based on the watermark image's aspect ratio.
    Param
    DESC
    image
    Watermark image, which must be a PNG image with transparent background
    rect
    Unified coordinates of the watermark relative to the encoded resolution. Value range of x, y, width, and height: 0–1.
    streamType
    Specify for which image to set the watermark. For more information, please see TRTCVideoStreamType.
    Note
    If you want to set watermarks for both the primary image (generally for the camera) and the substream image (generally for screen sharing), you need to call this API twice with streamType set to different values.
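    The sample calculation above can be sketched as follows, assuming a transparent PNG named "watermark_logo" exists in your app bundle:

    ```objectivec
    // Place a PNG watermark near the top-left of the primary stream.
    // With a 540x960 encode resolution, rect (0.1, 0.1, 0.2, 0.0) puts the
    // top-left corner at (54, 96) with a width of 108 px; the height is
    // derived automatically from the image's aspect ratio.
    TXImage *logo = [TXImage imageNamed:@"watermark_logo"]; // transparent PNG
    [[TRTCCloud sharedInstance] setWatermark:logo
                                  streamType:TRTCVideoStreamTypeBig
                                        rect:CGRectMake(0.1, 0.1, 0.2, 0.0)];
    ```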

    getAudioEffectManager

    getAudioEffectManager

    Get sound effect management class (TXAudioEffectManager)

    TXAudioEffectManager is a sound effect management API, through which you can implement the following features:
    Background music: both online music and local music can be played back with various features such as speed adjustment, pitch adjustment, original voice, accompaniment, and loop.
    In-ear monitoring: the sound captured by the mic is played back in the headphones in real time, which is generally used for music live streaming.
    Reverb effect: karaoke room, small room, big hall, deep, resonant, and other effects.
    Voice changing effect: young girl, middle-aged man, heavy metal, and other effects.
    Short sound effect: short sound effect files such as applause and laughter are supported (for files less than 10 seconds in length, please set the isShortFile parameter to YES ).

    startSystemAudioLoopback

    startSystemAudioLoopback

    Enable system audio capturing (not supported on iOS)

    This API captures audio data from the sound card of a macOS computer and mixes it into the current audio data stream of the SDK, so that other users in the room can also hear the sound played back on the current macOS system.
    In use cases such as video teaching or music live streaming, the teacher can use this feature to let the SDK capture the sound in the video played back by the teacher, so that students in the same room can also hear the sound in the video.
    Note
    1. This feature requires installing a virtual audio device plugin on the user's macOS system. After installation, the SDK will capture sound from the installed virtual device.
    2. The SDK will automatically download the appropriate plugin from the internet for installation, but the download may be slow. If you want to speed up this process, you can package the virtual audio plugin file into the Resources directory of your app bundle.

    stopSystemAudioLoopback

    stopSystemAudioLoopback

    Stop system audio capturing (not supported on iOS)

    setSystemAudioLoopbackVolume:

    setSystemAudioLoopbackVolume:
    - (void)setSystemAudioLoopbackVolume:
    (uint32_t)volume

    Set the volume of system audio capturing

    Param
    DESC
    volume
    Set volume. Value range: [0, 150]. Default value: 100

    startScreenCaptureInApp:encParam:

    startScreenCaptureInApp:encParam:
    - (void)startScreenCaptureInApp:
    (TRTCVideoStreamType)streamType
    encParam:
    (TRTCVideoEncParam *)encParams

    Start in-app screen sharing (for iOS 13.0 and above only)

    This API captures the real-time screen content of the current application and shares it with other users in the same room. It is applicable to iOS 13.0 and above.
    If you want to capture the screen content of the entire iOS system (instead of the current application), we recommend you use startScreenCaptureByReplaykit.
    
    Video encoding parameters recommended for screen sharing on iPhone (TRTCVideoEncParam):
    Resolution (videoResolution): 1280x720
    Frame rate (videoFps): 10 fps
    Bitrate (videoBitrate): 1600 Kbps
    Resolution adaption (enableAdjustRes): NO
    Param
    DESC
    encParams
    Video encoding parameters for screen sharing. We recommend you use the above configuration. If you set encParams to nil, the SDK will use the video encoding parameters you set before calling the startScreenCapture API.
    streamType
    Channel used for screen sharing, which can be the primary stream (TRTCVideoStreamTypeBig) or substream (TRTCVideoStreamTypeSub).
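    A minimal sketch of the call using the encoding parameters recommended above, publishing on the substream:

    ```objc
    // Hedged sketch: start in-app screen sharing (iOS 13.0+) on the substream
    // with the parameters recommended in this section.
    TRTCVideoEncParam *encParams = [[TRTCVideoEncParam alloc] init];
    encParams.videoResolution = TRTCVideoResolution_1280_720;
    encParams.videoFps = 10;
    encParams.videoBitrate = 1600;
    encParams.enableAdjustRes = NO;

    [[TRTCCloud sharedInstance] startScreenCaptureInApp:TRTCVideoStreamTypeSub
                                               encParam:encParams];
    ```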

    startScreenCaptureByReplaykit:encParam:appGroup:

    startScreenCaptureByReplaykit:encParam:appGroup:
    - (void)startScreenCaptureByReplaykit:
    (TRTCVideoStreamType)streamType
    encParam:
    (TRTCVideoEncParam *)encParams
    appGroup:
    (NSString *)appGroup

    Start system-level screen sharing (for iOS 11.0 and above only)

    This API supports capturing the screen of the entire iOS system, which can implement system-wide screen sharing similar to VooV Meeting.
    However, the integration steps are slightly more complicated than those of startScreenCaptureInApp. You need to implement a ReplayKit extension module for your application.
    
    For more information, please see the iOS screen sharing documentation.
    
    Video encoding parameters recommended for screen sharing on iPhone (TRTCVideoEncParam):
    Resolution (videoResolution): 1280x720
    Frame rate (videoFps): 10 fps
    Bitrate (videoBitrate): 1600 Kbps
    Resolution adaption (enableAdjustRes): NO
    Param
    DESC
    appGroup
    Specify the Application Group Identifier shared by your application and the screen sharing process. You can specify this parameter as nil , but we recommend you set it as instructed in the documentation for higher reliability.
    encParams
    Video encoding parameters for screen sharing. We recommend you use the above configuration.
    If you set encParams to nil , the SDK will use the video encoding parameters you set before calling the startScreenCapture API.
    streamType
    Channel used for screen sharing, which can be the primary stream (TRTCVideoStreamTypeBig) or substream (TRTCVideoStreamTypeSub).
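    The following sketch shows the call with an App Group identifier. The identifier is a placeholder; it must match the one configured for both the app and its ReplayKit broadcast upload extension:

    ```objc
    // Hedged sketch: system-level screen sharing via ReplayKit (iOS 11.0+).
    TRTCVideoEncParam *encParams = [[TRTCVideoEncParam alloc] init];
    encParams.videoResolution = TRTCVideoResolution_1280_720;
    encParams.videoFps = 10;
    encParams.videoBitrate = 1600;
    encParams.enableAdjustRes = NO;

    [[TRTCCloud sharedInstance] startScreenCaptureByReplaykit:TRTCVideoStreamTypeSub
                                                     encParam:encParams
                                                     appGroup:@"group.com.example.yourapp"]; // placeholder
    ```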

    startScreenCapture:streamType:encParam:

    startScreenCapture:streamType:encParam:
    - (void)startScreenCapture:
    (nullable NSView *)view
    streamType:
    (TRTCVideoStreamType)streamType
    encParam:
    (nullable TRTCVideoEncParam *)encParam

    Start screen sharing

    This API can capture the content of the entire screen or a specified application and share it with other users in the same room.
    Param
    DESC
    encParam
    Image encoding parameters used for screen sharing, which can be set to empty, indicating to let the SDK choose the optimal encoding parameters (such as resolution and bitrate).
    streamType
    Channel used for screen sharing, which can be the primary stream (TRTCVideoStreamTypeBig) or substream (TRTCVideoStreamTypeSub).
    view
    Parent control of the rendering control, which can be set to a null value, indicating not to display the preview of the shared screen.
    Note
    1. A user can publish at most one primary stream (TRTCVideoStreamTypeBig) and one substream (TRTCVideoStreamTypeSub) at the same time.
    2. By default, screen sharing uses the substream image. If you want to use the primary stream for screen sharing, you need to stop camera capturing (through stopLocalPreview) in advance to avoid conflicts.
    3. Only one user can use the substream for screen sharing in the same room at any time; that is, only one user is allowed to enable the substream in the same room at any time.
    4. When there is already a user in the room using the substream for screen sharing, calling this API will return the onError(ERR_SERVER_CENTER_ANOTHER_USER_PUSH_SUB_VIDEO) callback from TRTCCloudDelegate.

    stopScreenCapture

    stopScreenCapture

    Stop screen sharing

    pauseScreenCapture

    pauseScreenCapture

    Pause screen sharing

    Note
    Beginning with v11.5, paused screen capture outputs the last captured frame at a frame rate of 1 fps.

    resumeScreenCapture

    resumeScreenCapture

    Resume screen sharing

    getScreenCaptureSourcesWithThumbnailSize:iconSize:

    getScreenCaptureSourcesWithThumbnailSize:iconSize:
    - (NSArray<TRTCScreenCaptureSourceInfo*>*)getScreenCaptureSourcesWithThumbnailSize:
    (CGSize)thumbnailSize
    iconSize:
    (CGSize)iconSize

    Enumerate shareable screens and windows (for macOS only)

    When you integrate the screen sharing feature of a desktop system, you generally need to display a UI for selecting the sharing target, so that users can use the UI to choose whether to share the entire screen or a certain window.
    Through this API, you can query the IDs, names, and thumbnails of sharable windows on the current system. We provide a default UI implementation in the demo for your reference.
    Param
    DESC
    iconSize
    Specify the icon size of the window to be obtained.
    thumbnailSize
    Specify the thumbnail size of the window to be obtained. The thumbnail can be drawn on the window selection UI.
    Note
    The returned list contains the screen and the application windows. The screen is the first element in the list. If the user has multiple displays, then each display is a sharing target.

    Return Desc:

    List of windows (including the screen)

    selectScreenCaptureTarget:rect:capturesCursor:highlight:

    selectScreenCaptureTarget:rect:capturesCursor:highlight:
    - (void)selectScreenCaptureTarget:
    (TRTCScreenCaptureSourceInfo *)screenSource
    rect:
    (CGRect)rect
    capturesCursor:
    (BOOL)capturesCursor
    highlight:
    (BOOL)highlight

    Select the screen or window to share (for macOS only)

    After you get the sharable screen and windows through getScreenCaptureSources , you can call this API to select the target screen or window you want to share.
    During the screen sharing process, you can also call this API at any time to switch the sharing target.
    Param
    DESC
    capturesCursor
    Whether to capture mouse cursor
    highlight
    Whether to highlight the window being shared
    rect
    Specify the area to be captured (set this parameter to CGRectZero : when the sharing target is a window, the entire window will be shared, and when the sharing target is the desktop, the entire desktop will be shared)
    screenSource
    Specify sharing source
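    Putting the two APIs together, a typical macOS flow enumerates the targets, selects one, and then starts capturing. The thumbnail and icon sizes below are arbitrary example values:

    ```objc
    // Hedged sketch (macOS): enumerate sharing targets, pick the first screen,
    // then start substream screen sharing without a local preview.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    NSArray<TRTCScreenCaptureSourceInfo *> *sources =
        [trtcCloud getScreenCaptureSourcesWithThumbnailSize:CGSizeMake(192, 108)
                                                   iconSize:CGSizeMake(32, 32)];
    if (sources.count > 0) {
        TRTCScreenCaptureSourceInfo *screen = sources.firstObject; // the screen is listed first
        [trtcCloud selectScreenCaptureTarget:screen
                                        rect:CGRectZero   // CGRectZero: share the entire target
                              capturesCursor:YES
                                   highlight:YES];
        [trtcCloud startScreenCapture:nil                  // nil: no local preview
                           streamType:TRTCVideoStreamTypeSub
                             encParam:nil];                // nil: let the SDK choose parameters
    }
    ```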

    setSubStreamEncoderParam:

    setSubStreamEncoderParam:
    - (void)setSubStreamEncoderParam:
    (TRTCVideoEncParam *)param

    Set the video encoding parameters of screen sharing (i.e., substream) (for desktop and mobile systems)

    This API can set the image quality of screen sharing (i.e., the substream) viewed by remote users, which is also the image quality of screen sharing in on-cloud recording files.
    Please note the differences between the following two APIs:
    setVideoEncoderParam is used to set the video encoding parameters of the primary stream image (TRTCVideoStreamTypeBig, generally for camera).
    setSubStreamEncoderParam is used to set the video encoding parameters of the substream image (TRTCVideoStreamTypeSub, generally for screen sharing).
    Param
    DESC
    param
    Substream encoding parameters. For more information, please see TRTCVideoEncParam.

    setSubStreamMixVolume:

    setSubStreamMixVolume:
    - (void)setSubStreamMixVolume:
    (NSInteger)volume

    Set the audio mixing volume of screen sharing (for desktop systems only)

    The greater the value, the larger the ratio of the screen sharing volume to the mic volume. We recommend you not set a high value for this parameter as a high volume will cover the mic sound.
    Param
    DESC
    volume
    Set audio mixing volume. Value range: 0–100

    addExcludedShareWindow:

    addExcludedShareWindow:
    - (void)addExcludedShareWindow:
    (NSInteger)windowID

    Add specified windows to the exclusion list of screen sharing (for desktop systems only)

    The excluded windows will not be shared. This feature is generally used to add a certain application's window to the exclusion list to avoid privacy issues.
    You can set the filtered windows before starting screen sharing or dynamically add the filtered windows during screen sharing.
    Param
    DESC
    windowID
    Window to be excluded from sharing (window ID on macOS)
    Note
    1. This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeScreen; that is, the feature of excluding specified windows works only when the entire screen is shared.
    2. The windows added to the exclusion list through this API will be automatically cleared by the SDK after room exit.
    3. On macOS, please pass in the window ID (CGWindowID), which can be obtained through the sourceId member in TRTCScreenCaptureSourceInfo.

    removeExcludedShareWindow:

    removeExcludedShareWindow:
    - (void)removeExcludedShareWindow:
    (NSInteger)windowID

    Remove specified windows from the exclusion list of screen sharing (for desktop systems only)

    Param
    DESC
    windowID
    Window to be removed from the exclusion list (window ID on macOS)

    removeAllExcludedShareWindows

    removeAllExcludedShareWindows

    Remove all windows from the exclusion list of screen sharing (for desktop systems only)

    addIncludedShareWindow:

    addIncludedShareWindow:
    - (void)addIncludedShareWindow:
    (NSInteger)windowID

    Add specified windows to the inclusion list of screen sharing (for desktop systems only)

    This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow; that is, the feature of additionally including specified windows works only when a window is shared.
    You can call it before or after startScreenCapture.
    Param
    DESC
    windowID
    Window to be shared (window ID on macOS or window handle HWND on Windows)
    Note
    The windows added to the inclusion list by this method will be automatically cleared by the SDK after room exit.

    removeIncludedShareWindow:

    removeIncludedShareWindow:
    - (void)removeIncludedShareWindow:
    (NSInteger)windowID

    Remove specified windows from the inclusion list of screen sharing (for desktop systems only)

    This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow.
    That is, the feature of additionally including specified windows works only when a window is shared.
    Param
    DESC
    windowID
    Window to be shared (window ID on macOS or HWND on Windows)

    removeAllIncludedShareWindows

    removeAllIncludedShareWindows

    Remove all windows from the inclusion list of screen sharing (for desktop systems only)

    This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow.
    That is, the feature of additionally including specified windows works only when a window is shared.

    enableCustomVideoCapture:enable:

    enableCustomVideoCapture:enable:
    - (void)enableCustomVideoCapture:
    (TRTCVideoStreamType)streamType
    enable:
    (BOOL)enable

    Enable/Disable custom video capturing mode

    After this mode is enabled, the SDK will not run the original video capturing process (i.e., stopping camera data capturing and beauty filter operations) and will retain only the video encoding and sending capabilities.
    You need to use sendCustomVideoData to continuously insert the captured video image into the SDK.
    Param
    DESC
    enable
    Whether to enable. Default value: NO
    streamType
    Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image).

    sendCustomVideoData:frame:

    sendCustomVideoData:frame:
    - (void)sendCustomVideoData:
    (TRTCVideoStreamType)streamType
    frame:
    (TRTCVideoFrame *)frame

    Deliver captured video frames to SDK

    You can use this API to deliver video frames you capture to the SDK, and the SDK will encode and transfer them through its own network module.
    
    We recommend you enter the following information for the TRTCVideoFrame parameter (other fields can be left empty):
    pixelFormat: TRTCVideoPixelFormat_NV12 is recommended.
    bufferType: TRTCVideoBufferType_PixelBuffer is recommended.
    pixelBuffer: common video data format on iOS/macOS.
    data: raw video data format, which is used if bufferType is NSData .
    timestamp (ms): Set it to the timestamp when video frames are captured, which you can obtain by calling generateCustomPTS after getting a video frame.
    width: video image width, which needs to be set if bufferType is NSData .
    height: video image height, which needs to be set if bufferType is NSData .
    
    For more information, please see Custom Capturing and Rendering.
    Param
    DESC
    frame
    Video data, which can be in PixelBuffer NV12, BGRA, or I420 format.
    streamType
    Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image).
    Note
    1. We recommend you call the generateCustomPTS API to get the timestamp value of a video frame immediately after capturing it, so as to achieve the best audio/video sync effect.
    2. The video frame rate eventually encoded by the SDK is not determined by the frequency at which you call this API, but by the FPS you set in setVideoEncoderParam.
    3. Please try to keep the calling interval of this API even; otherwise, problems will occur, such as unstable output frame rate of the encoder or out-of-sync audio/video.
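    The delivery path described above can be sketched as follows. Here `pixelBuffer` stands for a frame produced by your own capture pipeline (camera, file decoder, etc.) and is not defined by this document:

    ```objc
    // Hedged sketch: enable custom capture on the big stream, then deliver
    // one captured frame with a capture-time PTS from generateCustomPTS.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    [trtcCloud enableCustomVideoCapture:TRTCVideoStreamTypeBig enable:YES];

    // For each frame your pipeline produces:
    TRTCVideoFrame *frame = [[TRTCVideoFrame alloc] init];
    frame.pixelFormat = TRTCVideoPixelFormat_NV12;
    frame.bufferType  = TRTCVideoBufferType_PixelBuffer;
    frame.pixelBuffer = pixelBuffer;                   // your CVPixelBufferRef (placeholder)
    frame.timestamp   = [trtcCloud generateCustomPTS]; // obtained at capture time
    [trtcCloud sendCustomVideoData:TRTCVideoStreamTypeBig frame:frame];
    ```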

    enableCustomAudioCapture:

    enableCustomAudioCapture:
    - (void)enableCustomAudioCapture:
    (BOOL)enable

    Enable custom audio capturing mode

    After this mode is enabled, the SDK will not run the original audio capturing process (i.e., stopping mic data capturing) and will retain only the audio encoding and sending capabilities.
    You need to use sendCustomAudioData to continuously insert the captured audio data into the SDK.
    Param
    DESC
    enable
    Whether to enable. Default value: NO
    Note
    As acoustic echo cancellation (AEC) requires strict control over the audio capturing and playback time, after custom audio capturing is enabled, AEC may fail.

    sendCustomAudioData:

    sendCustomAudioData:
    - (void)sendCustomAudioData:
    (TRTCAudioFrame *)frame

    Deliver captured audio data to SDK

    We recommend you enter the following information for the TRTCAudioFrame parameter (other fields can be left empty):
    audioFormat: audio data format, which can only be TRTCAudioFrameFormatPCM .
    data: audio frame buffer. Audio frame data must be in PCM format, and it supports a frame length of 5–100 ms (20 ms is recommended). Length calculation method: for example, if the sample rate is 48000, then the frame length for mono channel will be `48000 * 0.02s * 1 * 16 bit = 15360 bit = 1920 bytes`.
    sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000.
    channel: number of channels (if stereo is used, data is interwoven). Valid values: 1: mono channel; 2: dual channel.
    timestamp (ms): Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
    
    For more information, please see Custom Capturing and Rendering.
    Param
    DESC
    frame
    Audio data
    Note
    Please call this API accurately at intervals of the frame length; otherwise, sound lag may occur due to uneven data delivery intervals.
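    A sketch of one delivery, following the frame-length formula above (48000 samples/s × 0.02 s × 1 channel × 2 bytes = 1920 bytes for a 20 ms mono frame). The channel-count property is assumed here to be named `channels`:

    ```objc
    // Hedged sketch: deliver one 20 ms mono PCM frame at 48 kHz.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    [trtcCloud enableCustomAudioCapture:YES];

    NSMutableData *pcm = [NSMutableData dataWithLength:1920]; // filled by your capturer (placeholder)
    TRTCAudioFrame *frame = [[TRTCAudioFrame alloc] init];
    frame.audioFormat = TRTCAudioFrameFormatPCM;
    frame.data        = pcm;
    frame.sampleRate  = 48000;
    frame.channels    = 1;                                    // assumed property name
    frame.timestamp   = [trtcCloud generateCustomPTS];
    [trtcCloud sendCustomAudioData:frame];
    ```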

    enableMixExternalAudioFrame:playout:

    enableMixExternalAudioFrame:playout:
    - (void)enableMixExternalAudioFrame:
    (BOOL)enablePublish
    playout:
    (BOOL)enablePlayout

    Enable/Disable custom audio track

    After this feature is enabled, you can mix a custom audio track into the SDK through this API. With two boolean parameters, you can control whether to play back this track remotely or locally.
    Param
    DESC
    enablePlayout
    Whether the mixed audio track should be played back locally. Default value: NO
    enablePublish
    Whether the mixed audio track should be published to remote users. Default value: NO
    Note
    If you specify both enablePublish and enablePlayout as NO , the custom audio track will be completely closed.
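    The two-parameter control described above can be sketched as follows, enabling both remote publishing and local playback and then feeding one frame. The PCM buffer and the assumed `channels` property name are placeholders; the meaning of the int return value of mixExternalAudioFrame is not specified in this section:

    ```objc
    // Hedged sketch: enable the custom audio track for both publishing and
    // local playout, then mix in one 20 ms mono PCM frame at 48 kHz.
    TRTCCloud *trtcCloud = [TRTCCloud sharedInstance];
    [trtcCloud enableMixExternalAudioFrame:YES playout:YES];

    NSMutableData *pcmData = [NSMutableData dataWithLength:1920]; // placeholder 20 ms buffer
    TRTCAudioFrame *frame = [[TRTCAudioFrame alloc] init];
    frame.audioFormat = TRTCAudioFrameFormatPCM;
    frame.data        = pcmData;
    frame.sampleRate  = 48000;
    frame.channels    = 1;                                        // assumed property name
    int result = [trtcCloud mixExternalAudioFrame:frame];         // check for error codes
    ```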

    mixExternalAudioFrame:

    mixExternalAudioFrame:
    - (int)mixExternalAudioFrame:
    (TRTCAudioFrame *)frame

    Mix custom audio track into SDK

    Before you use this API to mix custom PCM audio into the SDK, you need to first enable custom audio tracks through enableMixExternalAudioFrame.
    You are expected to feed audio data into the SDK at an even pace, but we understand that it can be challenging to call an API at absolutely regular intervals.