Version: 0.1.x

Methods returned by useMeeting Hook - React

join()

  • It is used to join a meeting.
  • During initialization using the <MeetingProvider>, if joinWithoutUserInteraction is set to true, the participant will automatically join the meeting. If it is set to false, an explicit call to join() must be made (see the example below).

Events associated with join():

  • The local participant will receive an onMeetingJoined event when they have successfully joined.
  • Remote participants will receive an onParticipantJoined event with the newly joined Participant object from the event callback.

Participant having ask_join permission inside token

  • If a token contains the ask_join permission, the participant will not join the meeting directly after calling join(). Instead, an onEntryRequested event will be emitted to the participants who have the allow_join permission.

  • After a remote participant responds to the request, an onEntryResponded event will be emitted to the requesting participant. This event will contain the decision made by the remote participant.

Participant having allow_join permission inside token

  • If a token contains the allow_join permission, the participant will join the meeting directly after calling join().
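
Example

A minimal sketch of calling join() explicitly when joinWithoutUserInteraction is false; wrapping the call in a handler is purely illustrative:

const { join } = useMeeting();

const handleJoin = () => {
  // Explicitly join the meeting as the local participant,
  // e.g. from a button's onClick handler.
  join();
};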

leave()

  • leave() is used to leave the meeting for the local participant only (a combined example with end() appears below).

Events associated with leave():


end()

  • end() is used to end a meeting for all participants.

Events associated with end():
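
Example

A minimal sketch contrasting leave() and end(), both obtained from useMeeting():

const { leave, end } = useMeeting();

// Removes only the local participant; the meeting continues for others.
leave();

// Ends the meeting for every participant.
end();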


unmuteMic()

  • unmuteMic() is used to enable the mic of the local participant.

Events associated with unmuteMic():


muteMic()

  • muteMic() is used to disable the mic of the local participant.

Events associated with muteMic():


toggleMic()

  • toggleMic() is used to toggle the mic of the local participant.

Events associated with toggleMic():
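
Example

A minimal sketch of the mic controls returned by useMeeting():

const { unmuteMic, muteMic, toggleMic } = useMeeting();

// Enable the local participant's mic.
unmuteMic();

// Disable the local participant's mic.
muteMic();

// Switch the mic to the opposite state.
toggleMic();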


enableWebcam()

  • enableWebcam() is used to enable the webcam of the local participant.

Events associated with enableWebcam():


disableWebcam()

  • disableWebcam() is used to disable the webcam of the local participant.

Events associated with disableWebcam():


toggleWebcam()

  • toggleWebcam() is used to toggle the webcam of the local participant.

Events associated with toggleWebcam():
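
Example

A minimal sketch of the webcam controls returned by useMeeting():

const { enableWebcam, disableWebcam, toggleWebcam } = useMeeting();

// Enable the local participant's webcam.
enableWebcam();

// Disable the local participant's webcam.
disableWebcam();

// Switch the webcam to the opposite state.
toggleWebcam();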


enableScreenShare()

  • enableScreenShare() is used to enable screen sharing for the local participant.

Events associated with enableScreenShare():


disableScreenShare()

  • disableScreenShare() is used to disable screen sharing for the local participant.

Events associated with disableScreenShare():


toggleScreenShare()

  • toggleScreenShare() is used to toggle screen sharing for the local participant.

Events associated with toggleScreenShare():
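
Example

A minimal sketch of the screen-share controls returned by useMeeting():

const { enableScreenShare, disableScreenShare, toggleScreenShare } =
  useMeeting();

// Start sharing the local participant's screen.
enableScreenShare();

// Stop sharing the local participant's screen.
disableScreenShare();

// Switch screen sharing to the opposite state.
toggleScreenShare();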


startRecording()

  • startRecording() is used to start recording the meeting.

  • webhookUrl will be triggered when the recording is completed and stored on the server. Read more about webhooks here.

  • awsDirPath will be the path in your S3 bucket where you want the recordings to be stored. To allow us to store recordings in your S3 bucket, you will need to fill out this form with the required values: VideoSDK AWS S3 Integration.

  • config: mode is used to record either both video and audio, or audio only. By default, it is video-and-audio.

  • config: quality is only applicable to the video-and-audio mode.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • webhookUrl: String
  • awsDirPath: String
  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 4
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
    • orientation: "landscape" | "portrait"
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Events associated with startRecording():

Example

const webhookUrl = "https://webhook.your-api-server.com";

const awsDirPath = "/meeting-recordings/";

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 4,
  },
  theme: "DEFAULT",
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};

const { startRecording } = useMeeting();

startRecording(webhookUrl, awsDirPath, config, transcription);

stopRecording()

  • stopRecording() is used to stop the meeting recording.

Events associated with stopRecording():


startLivestream()

  • startLivestream() is used to start live streaming the meeting.

  • You will be able to live stream the meeting to other platforms, such as YouTube, Facebook, etc., that support RTMP streaming.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • outputs: Array<{ url: String, streamKey: String }>
  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • recording:
      • enabled: Boolean
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Events associated with startLivestream():

Example

const outputs = [
  {
    url: "rtmp://a.rtmp.youtube.com/live2",
    streamKey: "<STREAM_KEY>",
  },
  {
    url: "rtmps://",
    streamKey: "<STREAM_KEY>",
  },
];

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 9,
  },
  theme: "DEFAULT",
  recording: {
    enabled: true,
  },
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};

const { startLivestream } = useMeeting();

startLivestream(outputs, config, transcription);

stopLivestream()

  • stopLivestream() is used to stop live streaming the meeting.

Events associated with stopLivestream():


startHls()

  • startHls() will start HLS streaming of your meeting.

  • Once HLS is started, you will be able to watch the live stream of the meeting over HLS.

  • mode is used to stream either both video and audio, or audio only. By default, it is video-and-audio.

  • quality is only applicable to the video-and-audio mode.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
    • recording:
      • enabled: Boolean
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Events associated with startHls():

Example

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 9,
  },
  theme: "DEFAULT",
  recording: {
    enabled: true,
  },
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};



const { startHls } = useMeeting();

startHls(config, transcription);

stopHls()

  • stopHls() is used to stop the HLS streaming.

Events associated with stopHls():


getMics()

  • getMics() will return all mic devices connected.

Returns

  • Promise<{ deviceId: string; label: string }[]>

Example

const { getMics } = useMeeting();

const handleGetMics = async () => {
  const mics = await getMics();

  console.log(mics);
};

handleGetMics();

getWebcams()

  • getWebcams() will return all camera devices connected.

Returns

  • Promise<{ deviceId: string; label: string }[]>

Example

const { getWebcams } = useMeeting();

const handleGetWebcams = async () => {
  const webcams = await getWebcams();

  console.log(webcams);
};

handleGetWebcams();

changeMic()

  • changeMic() is used to change the mic device.
  • If multiple mic devices are connected, changeMic() can be used to switch between them.

Parameters

  • deviceId: String
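
Example

A minimal sketch of switching mics using a deviceId returned by getMics(); picking the second device is purely illustrative:

const { getMics, changeMic } = useMeeting();

const handleChangeMic = async () => {
  const mics = await getMics();

  // Switch to the second connected mic, if there is one.
  if (mics.length > 1) {
    changeMic(mics[1].deviceId);
  }
};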

changeWebcam()

  • changeWebcam() is used to change the webcam device.
  • If multiple webcam devices are connected, changeWebcam() can be used to switch between them.

Parameters

  • deviceId: String
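
Example

A minimal sketch of switching cameras using a deviceId returned by getWebcams(); picking the second device is purely illustrative:

const { getWebcams, changeWebcam } = useMeeting();

const handleChangeWebcam = async () => {
  const webcams = await getWebcams();

  // Switch to the second connected camera, if there is one.
  if (webcams.length > 1) {
    changeWebcam(webcams[1].deviceId);
  }
};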

startVideo()

  • startVideo() is used to start playing an external video in the meeting.

Parameters

  • link: String

Events associated with startVideo():


stopVideo()

  • stopVideo() stops playing the external video in the meeting.

Events associated with stopVideo():


pauseVideo()

  • pauseVideo() pauses the external video at the time provided in the currentTime parameter.

Parameters

  • currentTime : Number

Events associated with pauseVideo():


resumeVideo()

  • resumeVideo() resumes playing the external video in the meeting.

Events associated with resumeVideo():


seekVideo()

  • seekVideo() seeks the external video to the time provided in the currentTime parameter.

Parameters

  • currentTime : Number

Events associated with seekVideo():

  • The onVideoSeeked() event will be triggered for all participants with the currentTime value.
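
Example

A minimal sketch of the external video controls; the video link is a placeholder, and passing the parameters positionally follows the parameter lists above (verify the exact signatures against the SDK reference):

const { startVideo, stopVideo, pauseVideo, resumeVideo, seekVideo } =
  useMeeting();

// Start playing an external video in the meeting.
startVideo("https://example.com/video.mp4");

// Pause the video at 5 seconds, then resume it.
pauseVideo(5);
resumeVideo();

// Seek to 20 seconds, then stop playback.
seekVideo(20);
stopVideo();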

changeMode()

  • It is used to change the mode.
  • You can toggle between the CONFERENCE and VIEWER modes.
    • CONFERENCE: Both audio and video streams will be produced and consumed in this mode.
    • VIEWER: Audio and video streams will not be produced or consumed in this mode.

Parameters

  • mode: String

Returns

  • void
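
Example

A minimal sketch of switching the local participant to viewer mode:

const { changeMode } = useMeeting();

// In VIEWER mode, audio and video streams are neither produced nor consumed.
changeMode("VIEWER");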
