Version: 0.0.x

Meeting Class Methods - JavaScript

join()

  • It is used to join a meeting.
  • initMeeting() returns a new instance of Meeting after the meeting is initialized; however, by default it will not automatically join the meeting. Hence, to join the meeting you should call join().

Events associated with join():

Participant having ask_join permission inside token

  • If a token contains the ask_join permission, then the participant will not join the meeting directly after calling join(). Instead, an entry-requested event will be emitted to the participants having the allow_join permission.

  • After the decision from the remote participant, an entry-responded event will be emitted to the requesting participant. This event will contain the decision made by the remote participant (see the example below).

Participant having allow_join permission inside token

  • If a token contains the allow_join permission, then the participant will join the meeting directly after calling join().

Returns

  • void
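
Example

A minimal usage sketch; the meeting configuration values are placeholders and the event payload shapes shown are illustrative, not authoritative:

// Configure the token and initialize the meeting; join() must be called explicitly.
VideoSDK.config("<VIDEOSDK_TOKEN>");

const meeting = VideoSDK.initMeeting({
  meetingId: "<MEETING_ID>",
  name: "John Doe",
  micEnabled: true,
  webcamEnabled: true,
});

// Emitted to participants whose token has the allow_join permission
// when a participant with the ask_join permission calls join().
meeting.on("entry-requested", ({ participantId, name, allow, deny }) => {
  allow(); // or deny()
});

// Emitted to the requesting participant with the remote participant's decision.
meeting.on("entry-responded", (participantId, decision) => {
  console.log(participantId, decision);
});

meeting.join();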

leave()

  • It is used to leave the current meeting.

Events associated with leave():

Returns

  • void

end()

  • It is used to end the current running session.
  • By calling end(), all joined participants of that session, including the localParticipant, will leave the meeting.

Events associated with end():

Returns

  • void
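
Example

A minimal call sketch for leave() and end(); both are shown without arguments, called on the meeting instance returned by initMeeting():

// Only the localParticipant leaves the meeting.
meeting.leave();

// Ends the session for every joined participant, including the localParticipant.
meeting.end();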

enableWebcam()

  • It is used to enable the webcam of the localParticipant.

Returns

  • void

disableWebcam()

  • It is used to disable the webcam of the localParticipant.

Returns

  • void

unmuteMic()

  • It is used to unmute the mic of the localParticipant.

Returns

  • void

muteMic()

  • It is used to mute the mic of the localParticipant.

Returns

  • void

enableScreenShare()

  • It is used to enable screen sharing for the localParticipant.

Returns

  • void

disableScreenShare()

  • It is used to disable screen sharing for the localParticipant.

Returns

  • void
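
Example

A combined usage sketch for the media controls above, called on the meeting instance returned by initMeeting():

// Webcam of the localParticipant
meeting.enableWebcam();
meeting.disableWebcam();

// Mic of the localParticipant
meeting.unmuteMic();
meeting.muteMic();

// Screen sharing of the localParticipant
meeting.enableScreenShare();
meeting.disableScreenShare();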

uploadBase64File()

  • It is used to upload your file to VideoSDK's temporary storage.

  • base64Data - convert your file to a base64 string and pass it here.

  • token - pass your VideoSDK token. Read more about token here.

  • fileName - provide your file name with its extension.

  • The method will return the corresponding fileUrl, which can be used to retrieve the file from VideoSDK's storage system.

Parameters

  • base64Data: String
  • token: String
  • fileName: String

Returns

  • fileUrl - It can be used to retrieve the file from VideoSDK's storage system.

Example

const fileUrl = await meeting.uploadBase64File({
  base64Data: "<Your File's base64>", // Convert your file to base64 and pass here
  token: "<VIDEOSDK_TOKEN>",
  fileName: "myImage.jpeg", // Provide name with extension here
});

console.log("fileUrl", fileUrl);

fetchBase64File()

  • It is used to retrieve your file from VideoSDK's temporary storage.

  • url - pass the fileUrl which is returned by uploadBase64File().

  • token - pass your VideoSDK token. Read more about token here.

  • The method will return the image in the form of a base64 string.

Parameters

  • url: String
  • token: String

Returns

  • base64 - the image in the form of a base64 string.

Example

let base64 = await meeting.fetchBase64File({
  url: "<Your FileUrl>", // Provide fileUrl which is returned by uploadBase64File()
  token: "<VIDEOSDK_TOKEN>",
});

console.log("base64", base64);

startRecording()

  • It is used to start meeting recording.

  • All participants, including the localParticipant, will receive the recording-started event.

  • webhookUrl will be triggered when the recording is completed and stored on the server. Read more about webhooks here.

  • awsDirPath will be the path in your S3 bucket where you want to store the recordings. To allow us to store recordings in your S3 bucket, you will need to fill out this form by providing the required values. VideoSDK AWS S3 Integration

  • config: mode is used to record either both video and audio or audio only. By default, it is video-and-audio.

  • config: quality is only applicable to video-and-audio.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • webhookUrl: String
  • awsDirPath: String
  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 4
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
    • orientation: "landscape" | "portrait"
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Returns

  • void

Example

const webhookUrl = "https://webhook.your-api-server.com";

const awsDirPath = "/meeting-recordings/";

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 4, // max 4 for recording layouts
  },
  theme: "DEFAULT",
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};

startRecording(webhookUrl, awsDirPath, config, transcription);

stopRecording()

  • It is used to stop meeting recording.

Returns

  • void
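
Example

A minimal call sketch; stopRecording() is shown without arguments:

stopRecording();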

startLivestream()

  • It is used to start meeting live streaming.

  • You will be able to live stream the meeting to other platforms such as YouTube, Facebook, etc. that support RTMP streaming.

  • All participants, including the localParticipant, will receive the livestream-started event.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • outputs: Array<{ url: String, streamKey: String }>
  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • recording:
      • enabled: Boolean
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Returns

  • void

Example

const outputs = [
  {
    url: "rtmp://a.rtmp.youtube.com/live2",
    streamKey: "<STREAM_KEY>",
  },
  {
    url: "rtmps://",
    streamKey: "<STREAM_KEY>",
  },
];

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 9,
  },
  theme: "DEFAULT",
  recording: {
    enabled: true,
  },
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};

startLivestream(outputs, config, transcription);

stopLivestream()

  • It is used to stop meeting live streaming.

Returns

  • void
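
Example

A minimal call sketch; stopLivestream() is shown without arguments:

stopLivestream();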

startHls()

  • It is used to start meeting HLS.

  • You will be able to start HLS and watch the meeting's live stream over HLS.

  • All participants, including the localParticipant, will receive the hls-started event.

  • mode is used to start HLS streaming of either both video and audio or audio only. By default, it is video-and-audio.

  • quality is only applicable to video-and-audio.

  • transcription: enabled indicates whether post transcription is enabled.

  • transcription: summary: enabled indicates whether post transcription summary generation is enabled. Only applicable when post transcription is enabled.

  • transcription: summary: prompt provides guidelines or instructions for generating a custom summary based on the post transcription content.

Parameters

  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
    • recording:
      • enabled: Boolean
  • transcription:
    • enabled: Boolean
    • summary:
      • enabled: Boolean
      • prompt: String

Returns

  • void

Example

const config = {
  layout: {
    type: "SPOTLIGHT",
    priority: "PIN",
    gridSize: 9,
  },
  theme: "DEFAULT",
  recording: {
    enabled: true,
  },
};

const transcription = {
  enabled: true,
  summary: {
    enabled: true,
  },
};

startHls(config, transcription);

stopHls()

  • It is used to stop meeting HLS.

Returns

  • void
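
Example

A minimal call sketch; stopHls() is shown without arguments:

stopHls();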

startTranscription()

  • It is used to start realtime transcription.

  • All participants, including the localParticipant, will receive the TRANSCRIPTION_STARTING event state and the transcription-text event.

  • config: webhookUrl will be triggered when the state of the realtime transcription changes. Read more about webhooks here.

  • config: summary: enabled indicates whether realtime transcription summary generation is enabled. The summary will be available after the realtime transcription is stopped.

  • config: summary: prompt provides guidelines or instructions for generating a custom summary based on the realtime transcription content.

Parameters

  • config:
    • webhookUrl: String
    • summary:
      • enabled: Boolean
      • prompt: String

Returns

  • void

Example

const config = {
  webhookUrl: "https://webhook.your-api-server.com",
  summary: {
    enabled: true,
    prompt:
      "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary",
  },
};

startTranscription(config);

stopTranscription()

  • It is used to stop realtime transcription.

Returns

  • void

Example

stopTranscription();

startWhiteboard()

Returns

  • url

Example

startWhiteboard();

stopWhiteboard()

Returns

  • void

Example

stopWhiteboard();

getWebcams()

  • It will return all camera devices connected.

Returns

  • Promise<{ deviceId: string; label: string }[]>

Example

const handleGetWebcams = async () => {
  const webcams = await getWebcams();

  console.log(webcams);
};

handleGetWebcams();

changeWebcam()

  • It is used to change the webcam device.
  • If multiple webcam devices are connected, changeWebcam() can be used to switch to another camera device.

Parameters

  • deviceId: String

Returns

  • void
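
Example

A usage sketch combining getWebcams() and changeWebcam(); picking the last device in the list is just illustrative:

const handleChangeWebcam = async () => {
  // List the connected camera devices and switch to the last one.
  const webcams = await getWebcams();
  const { deviceId } = webcams[webcams.length - 1];

  changeWebcam(deviceId);
};

handleChangeWebcam();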

setWebcamQuality()

  • It is used to set the webcam quality.
  • By using setWebcamQuality(), the quality of the localParticipant's uploaded webcam stream can be changed from low to high or vice versa.

Parameters

  • quality: "low" | "med" | "high"

Returns

  • void
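
Example

A minimal call sketch using one of the documented quality values:

setWebcamQuality("high");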

getMics()

  • It will return all mic devices connected.

Returns

  • Promise<{ deviceId: string; label: string }[]>

Example

const handleGetMics = async () => {
  const mics = await getMics();

  console.log(mics);
};

handleGetMics();

changeMic()

  • It is used to change the mic device.
  • If multiple mic devices are connected, changeMic() can be used to switch to another mic device.

Parameters

  • deviceId: String

Returns

  • void
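
Example

A usage sketch combining getMics() and changeMic(); picking the last device in the list is just illustrative:

const handleChangeMic = async () => {
  // List the connected mic devices and switch to the last one.
  const mics = await getMics();
  const { deviceId } = mics[mics.length - 1];

  changeMic(deviceId);
};

handleChangeMic();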

on()

  • It is used to add a listener for the given meeting event.

Parameters

  • eventType: String
  • listener: Function

Returns

  • void

Example

//for meeting-any-event
meeting.on("meeting-any-event", listener);

off()

  • It is used to remove a listener for the given meeting event.

Parameters

  • eventType: String
  • listener: Function

Returns

  • void

Example

//for meeting-any-event
meeting.off("meeting-any-event", listener);

changeMode()

  • It is used to change the mode.
  • You can toggle between the CONFERENCE and VIEWER modes.
    • CONFERENCE: Both audio and video streams will be produced and consumed in this mode.
    • VIEWER: Audio and video streams will not be produced or consumed in this mode.

Parameters

  • mode: String

Returns

  • void
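
Example

A minimal usage sketch toggling between the documented modes, called on the meeting instance returned by initMeeting():

// Stop producing and consuming media.
meeting.changeMode("VIEWER");

// Produce and consume media again.
meeting.changeMode("CONFERENCE");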
