Version: 0.1.x

Meeting Class Methods - Android

join()

  • It is used to join a meeting.
  • initMeeting() returns a new instance of Meeting once initialization completes; however, it does not automatically join the meeting. To join the meeting, you must call join().

Events associated with join():

Participant having ask_join permission inside token

  • If the token contains the ask_join permission, the participant will not join the meeting directly after calling join(). Instead, an onEntryRequested event will be emitted to every participant holding the allow_join permission.

  • Once a remote participant responds, an onEntryResponded event will be emitted to the requesting participant, containing the decision made by the remote participant.

Participant having allow_join permission inside token

  • If the token contains the allow_join permission, the participant will join the meeting directly after calling join().

Returns

  • void
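
Example

A minimal sketch of joining after initialization. The onMeetingJoined override shown here is an assumed callback of the SDK's MeetingEventListener; consult the events reference for the exact callback set.

```kotlin
// Register a listener before joining so join-related events are not missed.
meeting!!.addEventListener(object : MeetingEventListener() {
    override fun onMeetingJoined() {
        Log.d("VideoSDK", "Meeting joined")
    }
})

// join() returns void; the outcome is delivered through the events above.
meeting!!.join()
```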

leave()

  • It is used to leave the current meeting.

Returns

  • void

end()

  • It is used to end the current running session.
  • By calling end(), all joined participants of that session, including the localParticipant, will leave the meeting.

Returns

  • void
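
Example

A short sketch contrasting leave() with end(); `meeting` is your initialized Meeting instance.

```kotlin
// leave() removes only the local participant; the session keeps running.
meeting!!.leave()

// end() terminates the session: every joined participant, including
// localParticipant, leaves the meeting.
meeting!!.end()
```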

enableWebcam()

  • It is used to enable the local participant's camera.
  • The onStreamEnabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

Returns

  • void

disableWebcam()

  • It is used to disable the local participant's camera.
  • The onStreamDisabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

Returns

  • void
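
Example

A sketch of toggling the local camera. `webcamEnabled` is a hypothetical flag your app would maintain, for example from the onStreamEnabled/onStreamDisabled events above.

```kotlin
if (webcamEnabled) {
    meeting!!.disableWebcam() // emits onStreamDisabled
} else {
    meeting!!.enableWebcam()  // emits onStreamEnabled
}
```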

unmuteMic()

  • It is used to enable the local participant's microphone.
  • The onStreamEnabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

Returns

  • void

muteMic()

  • It is used to disable the local participant's microphone.
  • The onStreamDisabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

Returns

  • void
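
Example

A sketch of toggling the local microphone. `micEnabled` is a hypothetical flag your app would maintain from the stream events above.

```kotlin
if (micEnabled) {
    meeting!!.muteMic()   // emits onStreamDisabled
} else {
    meeting!!.unmuteMic() // emits onStreamEnabled
}
```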

enableScreenShare()

  • It is used to enable screen-sharing.

  • The onStreamEnabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

  • The onPresenterChanged() event will be triggered for all participants, carrying the presenter's participantId.

Parameters

  • data: Intent

Returns

  • void
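
Example

The Intent parameter comes from Android's standard screen-capture permission flow (MediaProjectionManager). A sketch under that assumption, written inside an Activity where `meeting` is your Meeting instance:

```kotlin
// Request screen-capture permission, then pass the resulting Intent
// to enableScreenShare().
private val CAPTURE_PERMISSION_REQUEST_CODE = 1

private fun requestScreenShare() {
    val projectionManager =
        getSystemService(MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
    startActivityForResult(
        projectionManager.createScreenCaptureIntent(),
        CAPTURE_PERMISSION_REQUEST_CODE
    )
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == CAPTURE_PERMISSION_REQUEST_CODE && resultCode == RESULT_OK) {
        meeting!!.enableScreenShare(data) // starts screen-sharing
    }
    // If the user denies permission, screen-sharing cannot start.
}
```

Call disableScreenShare() to stop sharing once started.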

disableScreenShare()

  • It is used to disable screen-sharing.

  • The onStreamDisabled event of ParticipantEventListener will be emitted, with the stream object available in the event callback.

  • The onPresenterChanged() event will be triggered for all participants with null.

Returns

  • void

uploadBase64File()

  • It is used to upload your file to VideoSDK's temporary storage.

  • base64Data: convert your file to base64 and pass it here.

  • token: pass your VideoSDK token. Read more about tokens here.

  • fileName: provide your file name with its extension.

  • TaskCompletionListener will handle the result of the upload operation.

    • When the upload is complete, the onComplete() method of TaskCompletionListener will provide the corresponding fileUrl, which can be used to retrieve the uploaded file.

    • If an error occurs during the upload process, the onError() method of TaskCompletionListener will provide the error details.

Parameters

  • base64Data: String
  • token: String
  • fileName: String
  • listener: TaskCompletionListener

Returns

  • void

Example

private fun uploadFile() {
    val base64Data = "<Your File's base64>" // Convert your file to base64 and pass here
    val token = "<VIDEOSDK_TOKEN>"
    val fileName = "myImage.jpeg" // Provide name with extension here
    meeting!!.uploadBase64File(
        base64Data,
        token,
        fileName,
        object : TaskCompletionListener<String?, String?> {
            override fun onComplete(data: String?) {
                Log.d("VideoSDK", "Uploaded file url: $data")
            }

            override fun onError(error: String?) {
                Log.d("VideoSDK", "Error in upload file: $error")
            }
        }
    )
}

fetchBase64File()

  • It is used to retrieve your file from VideoSDK's temporary storage.

  • url: pass the fileUrl returned by uploadBase64File().

  • token: pass your VideoSDK token. Read more about tokens here.

  • TaskCompletionListener will handle the result of the fetch operation.

    • When the fetch operation is complete, the onComplete() method of TaskCompletionListener will provide the file in base64 format.

    • If an error occurs during the fetch operation, the onError() method of TaskCompletionListener will provide the error details.

Parameters

  • url: String
  • token: String
  • listener: TaskCompletionListener

Returns

  • void

Example

private fun fetchFile() {
    val url = "<Your FileUrl>" // Provide fileUrl which is returned by uploadBase64File()
    val token = "<VIDEOSDK_TOKEN>"
    meeting!!.fetchBase64File(url, token, object : TaskCompletionListener<String?, String?> {
        override fun onComplete(data: String?) {
            Log.d("VideoSDK", "Fetched file in base64: $data")
        }

        override fun onError(error: String?) {
            Log.d("VideoSDK", "Error in fetch file: $error")
        }
    })
}

startRecording()

  • startRecording() is used to start meeting recording.

  • webhookUrl will be triggered when the recording is completed and stored on the server. Read more about webhooks here.

  • awsDirPath is the path in your S3 bucket where you want recordings to be stored. To allow VideoSDK to store recordings in your S3 bucket, you will need to fill out this form with the required values: VideoSDK AWS S3 Integration.

  • config: mode is used to record either both video and audio, or audio only. By default, it is video-and-audio.

  • config: quality is applicable only to video-and-audio mode.

  • transcription: this parameter lets you start post-transcription for the recording.

Parameters

  • webhookUrl: String

  • awsDirPath: String

  • config:

    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 4
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
    • orientation: "landscape" | "portrait"
  • transcription: PostTranscriptionConfig

    • PostTranscriptionConfig.enabled: boolean
    • PostTranscriptionConfig.modelId: String
    • PostTranscriptionConfig.summary: SummaryConfig
      • SummaryConfig.enabled: boolean
      • SummaryConfig.prompt: String

Returns

  • void

Example

val webhookUrl = "https://webhook.your-api-server.com"
var config = JSONObject()
var layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "theme", "DARK")

val prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
val summaryConfig = SummaryConfig(true, prompt)
val modelId = "raman_v1"
val transcription = PostTranscriptionConfig(true, summaryConfig, modelId)

meeting!!.startRecording(webhookUrl, null, config, transcription)

stopRecording()

  • It is used to stop meeting recording.

Returns

  • void

Example

meeting!!.stopRecording()

startLivestream()

  • startLivestream() is used to start meeting livestreaming.

  • You can livestream the meeting to platforms that support RTMP streaming, such as YouTube and Facebook.

Parameters

  • outputs: List<LivestreamOutput>
  • config:
    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"

Returns

  • void

Example

val YOUTUBE_RTMP_URL = "rtmp://a.rtmp.youtube.com/live2"
val YOUTUBE_RTMP_STREAM_KEY = "<STREAM_KEY>"

val outputs: MutableList<LivestreamOutput> = ArrayList()
outputs.add(LivestreamOutput(YOUTUBE_RTMP_URL, YOUTUBE_RTMP_STREAM_KEY))

var config = JSONObject()
var layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "theme", "DARK")

meeting!!.startLivestream(outputs, config)

stopLivestream()

  • It is used to stop meeting livestreaming.

Returns

  • void

Example

meeting!!.stopLivestream()

startHls()

  • startHls() will start HLS streaming of your meeting.

  • You will be able to start HLS and watch the live stream of the meeting over HLS.

  • mode is used to stream either both video and audio, or audio only. By default, it is video-and-audio.

  • quality is applicable only to video-and-audio mode.

  • transcription: this parameter lets you start post-transcription for the recording.

Parameters

  • config:

    • layout:
      • type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
      • priority: "SPEAKER" | "PIN"
      • gridSize: Number max 25
    • theme: "DARK" | "LIGHT" | "DEFAULT"
    • mode: "video-and-audio" | "audio"
    • quality: "low" | "med" | "high"
  • transcription: PostTranscriptionConfig

    • PostTranscriptionConfig.enabled: boolean
    • PostTranscriptionConfig.modelId: String
    • PostTranscriptionConfig.summary: SummaryConfig
      • SummaryConfig.enabled: boolean
      • SummaryConfig.prompt: String

Returns

  • void

Example

var config = JSONObject()
var layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "orientation", "portrait")
JsonUtils.jsonPut(config, "theme", "DARK")

val prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
val summaryConfig = SummaryConfig(true, prompt)
val modelId = "raman_v1"
val transcription = PostTranscriptionConfig(true, summaryConfig, modelId)

meeting!!.startHls(config, transcription)

stopHls()

  • stopHls() is used to stop the HLS streaming.

Returns

  • void

Example

meeting!!.stopHls()

startTranscription()

  • startTranscription() is used to start realtime transcription.

Parameters

config

  • type : TranscriptionConfig

  • This specifies the configuration for realtime transcription. You can specify the following properties:

    • TranscriptionConfig.webhookUrl: webhooks will be triggered when the state of the realtime transcription changes. Read more about webhooks here.
    • TranscriptionConfig.summary: SummaryConfig
      • enabled: indicates whether summary generation for the realtime transcription is enabled. The summary will be available after the realtime transcription is stopped. Default: false
      • prompt: provides guidelines or instructions for generating a custom summary based on the realtime transcription content.

Returns

  • void

Example

// Realtime Transcription Configuration
val webhookUrl = "https://www.example.com"

val summaryConfig = SummaryConfig(
    true,
    "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
)
val transcriptionConfig = TranscriptionConfig(
    webhookUrl,
    summaryConfig
)

meeting!!.startTranscription(transcriptionConfig)

stopTranscription()

  • stopTranscription() is used to stop realtime transcription.

Returns

  • void

Example

meeting!!.stopTranscription()

changeMode()

  • It is used to change the participant's mode.
  • You can toggle between the CONFERENCE and VIEWER modes.
    • CONFERENCE: both audio and video streams are produced and consumed in this mode.
    • VIEWER: audio and video streams are neither produced nor consumed in this mode.

Parameters

  • mode: String

Returns

  • void

Example

meeting!!.changeMode("VIEWER")

getMics()

  • It returns all connected microphone devices.

Returns

  • Set<AppRTCAudioManager.AudioDevice>

Example

val mics = meeting!!.mics
for (device in mics) {
    Toast.makeText(this, "Mic: $device", Toast.LENGTH_SHORT).show()
}

changeMic()

  • It is used to change the mic device.
  • If multiple mic devices are connected, changeMic() can be used to switch between them.

Parameters

  • device: AppRTCAudioManager.AudioDevice

Returns

  • void

Example

meeting!!.changeMic(AppRTCAudioManager.AudioDevice.BLUETOOTH)

changeWebcam()

  • It is used to change the camera device.
  • If multiple camera devices are connected, by using changeWebcam(), one can change the camera device with its respective device id.
  • You can get a list of connected video devices using VideoSDK.getVideoDevices()

Parameters

  • deviceId:

    • The deviceId represents the unique identifier of the camera device you wish to switch to. If no deviceId is provided, the facing mode will toggle, from the back camera to the front camera if the back camera is currently in use, or from the front camera to the back camera if the front camera is currently in use.

      • type : String
      • OPTIONAL

Returns

  • void

Example

meeting!!.changeWebcam()

setAudioDeviceChangeListener()

  • When a local participant changes the mic, AppRTCAudioManager.AudioManagerEvents() is triggered, which can be set using the setAudioDeviceChangeListener() method.

Parameters

  • audioManagerEvents: AppRTCAudioManager.AudioManagerEvents

Returns

  • void

Example

meeting!!.setAudioDeviceChangeListener(object : AudioManagerEvents {
    override fun onAudioDeviceChanged(
        selectedAudioDevice: AppRTCAudioManager.AudioDevice,
        availableAudioDevices: Set<AppRTCAudioManager.AudioDevice>
    ) {
        when (selectedAudioDevice) {
            AppRTCAudioManager.AudioDevice.BLUETOOTH ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: BLUETOOTH", Toast.LENGTH_SHORT).show()

            AppRTCAudioManager.AudioDevice.WIRED_HEADSET ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: WIRED_HEADSET", Toast.LENGTH_SHORT).show()

            AppRTCAudioManager.AudioDevice.SPEAKER_PHONE ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: SPEAKER_PHONE", Toast.LENGTH_SHORT).show()

            AppRTCAudioManager.AudioDevice.EARPIECE ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: EARPIECE", Toast.LENGTH_SHORT).show()
        }
    }
})

addEventListener()

  • It is used to register a MeetingEventListener through which meeting events are received.

Parameters

  • listener: MeetingEventListener

Returns

  • void

removeEventListener()

  • It is used to remove a previously registered MeetingEventListener.

Parameters

  • listener: MeetingEventListener

Returns

  • void

removeAllListeners()

  • It is used to remove all registered MeetingEventListeners.

Returns

  • void
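
Example

A sketch of registering and removing listeners. The onMeetingJoined and onMeetingLeft overrides are assumed callback names; check the events reference for the full list your SDK version supports.

```kotlin
// Keep a reference so the same listener can be removed later.
private val meetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
        Log.d("VideoSDK", "Meeting joined")
    }

    override fun onMeetingLeft() {
        Log.d("VideoSDK", "Meeting left")
    }
}

// Register the listener
meeting!!.addEventListener(meetingEventListener)

// Remove a single listener, or clear all of them at once
meeting!!.removeEventListener(meetingEventListener)
meeting!!.removeAllListeners()
```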
