Meeting Class Methods - Android
join()
- It is used to join a meeting.
- After meeting initialization via initMeeting(), it returns a new instance of Meeting. However, by default, it will not automatically join the meeting. Hence, to join the meeting you should call join().
Events associated with join():
- Local participant will receive an onMeetingJoined event when successfully joined.
- Remote participants will receive an onParticipantJoined event with the newly joined Participant object from the event callback.
Participant having ask_join permission inside token
- If a token contains the ask_join permission, the participant will not join the meeting directly after calling join(). Instead, an onEntryRequested event will be emitted to the participants having the allow_join permission.
- After the decision from the remote participant, an onEntryResponded event will be emitted to the requesting participant. This event will contain the decision made by the remote participant.
Participant having allow_join permission inside token
- If a token contains the allow_join permission, the participant will join the meeting directly after calling join().
Returns
void
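Example
A minimal sketch, assuming meeting is the Meeting instance returned by VideoSDK.initMeeting():
- Kotlin
// joining is explicit; initMeeting() alone does not enter the room
meeting!!.join()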
leave()
- It is used to leave the current meeting.
Events associated with leave():
- Local participant will receive an onMeetingLeft event.
- All remote participants will receive an onParticipantLeft event with the participantId.
Returns
void
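Example
A minimal sketch, assuming a non-null meeting that has already been joined:
- Kotlin
// only the local participant leaves; the session keeps running for others
meeting!!.leave()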
end()
- It is used to end the currently running session.
- By calling end(), all joined participants, including the localParticipant of that session, will leave the meeting.
Events associated with end():
- All participants, including the localParticipant, will receive an onMeetingLeft event.
Returns
void
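Example
A minimal sketch, assuming a non-null, joined meeting:
- Kotlin
// ends the session for everyone, not just the local participant
meeting!!.end()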
enableWebcam()
- It is used to enable the self camera.
- An onStreamEnabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
Returns
void
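Example
A minimal sketch, assuming a non-null, joined meeting:
- Kotlin
meeting!!.enableWebcam()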
disableWebcam()
- It is used to disable the self camera.
- An onStreamDisabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
Returns
void
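Example
A minimal sketch, assuming a non-null, joined meeting:
- Kotlin
meeting!!.disableWebcam()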
unmuteMic()
- It is used to enable the self microphone.
- An onStreamEnabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
Returns
void
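Example
A minimal sketch, assuming a non-null, joined meeting:
- Kotlin
meeting!!.unmuteMic()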
muteMic()
- It is used to disable the self microphone.
- An onStreamDisabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
Returns
void
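Example
A minimal sketch, assuming a non-null, joined meeting:
- Kotlin
meeting!!.muteMic()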
enableScreenShare()
- It is used to enable screen-sharing.
- An onStreamEnabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
- An onPresenterChanged() event will be triggered to all participants with the participantId.
Parameters
- data: Intent
Returns
void
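Example
A minimal sketch of obtaining the screen-capture permission Intent via Android's MediaProjection API and handing it to enableScreenShare(). The CAPTURE_PERMISSION_REQUEST_CODE constant, the Activity context, and the non-null meeting are assumptions of this sketch:
- Kotlin
private val CAPTURE_PERMISSION_REQUEST_CODE = 1

private fun requestScreenShare() {
    // ask the user for screen-capture permission
    val manager = getSystemService(MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
    startActivityForResult(manager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE)
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == CAPTURE_PERMISSION_REQUEST_CODE && resultCode == RESULT_OK) {
        // pass the granted permission Intent to the SDK
        meeting!!.enableScreenShare(data)
    }
}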
disableScreenShare()
- It is used to disable screen-sharing.
- An onStreamDisabled event of ParticipantEventListener will be emitted with the stream object from the event callback.
- An onPresenterChanged() event will be triggered to all participants with null.
Returns
void
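Example
A minimal sketch, assuming a non-null meeting that is currently presenting:
- Kotlin
meeting!!.disableScreenShare()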
uploadBase64File()
- It is used to upload your file to VideoSDK's temporary storage.
- base64Data: convert your file to base64 and pass it here.
- token: pass your VideoSDK token. Read more about tokens here.
- fileName: provide your file name with extension.
- TaskCompletionListener will handle the result of the upload operation.
  - When the upload is complete, the onComplete() method of TaskCompletionListener will provide the corresponding fileUrl, which can be used to retrieve the uploaded file.
  - If an error occurs during the upload process, the onError() method of TaskCompletionListener will provide the error details.
Parameters
- base64Data: String
- token: String
- fileName: String
- listener: TaskCompletionListener
Returns
void
Example
- Kotlin
- Java
private fun uploadFile() {
    val base64Data = "<Your File's base64>" // Convert your file to base64 and pass here
    val token = "<VIDEOSDK_TOKEN>"
    val fileName = "myImage.jpeg" // Provide name with extension here
    meeting!!.uploadBase64File(
        base64Data,
        token,
        fileName,
        object : TaskCompletionListener<String?, String?> {
            override fun onComplete(data: String?) {
                Log.d("VideoSDK", "Uploaded file url: $data")
            }

            override fun onError(error: String?) {
                Log.d("VideoSDK", "Error in upload file: $error")
            }
        }
    )
}
private void uploadFile() {
    String base64Data = "<Your File's base64>"; // Convert your file to base64 and pass here
    String token = "<VIDEOSDK_TOKEN>";
    String fileName = "myImage.jpeg"; // Provide name with extension here
    meeting.uploadBase64File(base64Data, token, fileName, new TaskCompletionListener<String, String>() {
        @Override
        public void onComplete(@Nullable String data) {
            Log.d("VideoSDK", "Uploaded file url: " + data);
        }

        @Override
        public void onError(@Nullable String error) {
            Log.d("VideoSDK", "Error in upload file: " + error);
        }
    });
}
fetchBase64File()
- It is used to retrieve your file from VideoSDK's temporary storage.
- url: pass the fileUrl which is returned by uploadBase64File().
- token: pass your VideoSDK token. Read more about tokens here.
- TaskCompletionListener will handle the result of the fetch operation.
  - When the fetch operation is complete, the onComplete() method of TaskCompletionListener will provide the file in base64 format.
  - If an error occurs during the fetch operation, the onError() method of TaskCompletionListener will provide the error details.
Parameters
- url: String
- token: String
- listener: TaskCompletionListener
Returns
void
Example
- Kotlin
- Java
private fun fetchFile() {
    val url = "<Your FileUrl>" // Provide fileUrl which is returned by uploadBase64File()
    val token = "<VIDEOSDK_TOKEN>"
    meeting!!.fetchBase64File(url, token, object : TaskCompletionListener<String?, String?> {
        override fun onComplete(data: String?) {
            Log.d("VideoSDK", "Fetched file in base64: $data")
        }

        override fun onError(error: String?) {
            Log.d("VideoSDK", "Error in fetch file: $error")
        }
    })
}
private void fetchFile() {
    String url = "<Your FileUrl>"; // Provide fileUrl which is returned by uploadBase64File()
    String token = "<VIDEOSDK_TOKEN>";
    meeting.fetchBase64File(url, token, new TaskCompletionListener<String, String>() {
        @Override
        public void onComplete(@Nullable String data) {
            Log.d("VideoSDK", "Fetched file in base64: " + data);
        }

        @Override
        public void onError(@Nullable String error) {
            Log.d("VideoSDK", "Error in fetch file: " + error);
        }
    });
}
startRecording()
- startRecording() is used to start meeting recording.
- webhookUrl will be triggered when the recording is completed and stored on the server. Read more about webhooks here.
- awsDirPath will be the path in your S3 bucket where you want to store the recordings. To allow us to store recordings in your S3 bucket, you will need to fill out this form with the required values: VideoSDK AWS S3 Integration.
- config: mode is used to record either both video-and-audio or only audio. By default it is video-and-audio.
- config: quality is only applicable to video-and-audio.
- transcription: this parameter lets you start post transcription for the recording.
Parameters
- webhookUrl: String
- awsDirPath: String
- config:
  - layout:
    - type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
    - priority: "SPEAKER" | "PIN"
    - gridSize: Number (max 4)
  - theme: "DARK" | "LIGHT" | "DEFAULT"
  - mode: "video-and-audio" | "audio"
  - quality: "low" | "med" | "high"
  - orientation: "landscape" | "portrait"
- transcription: PostTranscriptionConfig
  - PostTranscriptionConfig.enabled: boolean
  - PostTranscriptionConfig.modelId: String
  - PostTranscriptionConfig.summary: SummaryConfig
    - SummaryConfig.enabled: boolean
    - SummaryConfig.prompt: String
Returns
void
Events associated with startRecording():
- Every participant will receive a callback on onRecordingStateChanged().
Example
- Kotlin
- Java
val webhookUrl = "https://webhook.your-api-server.com"
val config = JSONObject()
val layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "theme", "DARK")
val prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
val summaryConfig = SummaryConfig(true, prompt)
val modelId = "raman_v1"
val transcription = PostTranscriptionConfig(true, summaryConfig, modelId)
meeting!!.startRecording(webhookUrl, null, config, transcription)
String webhookUrl = "https://webhook.your-api-server.com";
JSONObject config = new JSONObject();
JSONObject layout = new JSONObject();
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT");
JsonUtils.jsonPut(layout, "priority", "PIN");
JsonUtils.jsonPut(layout, "gridSize", 9);
JsonUtils.jsonPut(config, "layout", layout);
JsonUtils.jsonPut(config, "theme", "DARK");
String prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary";
SummaryConfig summaryConfig = new SummaryConfig(true, prompt);
String modelId = "raman_v1";
PostTranscriptionConfig transcription = new PostTranscriptionConfig(true, summaryConfig, modelId);
meeting.startRecording(webhookUrl, null, config, transcription);
stopRecording()
- It is used to stop meeting recording.
Returns
void
Events associated with stopRecording():
- Every participant will receive a callback on onRecordingStateChanged().
Example
- Kotlin
- Java
meeting!!.stopRecording()
meeting.stopRecording();
startLivestream()
- startLivestream() is used to start meeting livestreaming.
- You will be able to livestream the meeting to other platforms such as YouTube, Facebook, etc. that support RTMP streaming.
Parameters
- outputs: List<LivestreamOutput>
- config:
  - layout:
    - type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
    - priority: "SPEAKER" | "PIN"
    - gridSize: Number (max 25)
  - theme: "DARK" | "LIGHT" | "DEFAULT"
Returns
void
Events associated with startLivestream():
- Every participant will receive a callback on onLivestreamStateChanged().
Example
- Kotlin
- Java
val YOUTUBE_RTMP_URL = "rtmp://a.rtmp.youtube.com/live2"
val YOUTUBE_RTMP_STREAM_KEY = "<STREAM_KEY>"
val outputs: MutableList<LivestreamOutput> = ArrayList()
outputs.add(LivestreamOutput(YOUTUBE_RTMP_URL, YOUTUBE_RTMP_STREAM_KEY))
val config = JSONObject()
val layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "theme", "DARK")
meeting!!.startLivestream(outputs, config)
final String YOUTUBE_RTMP_URL = "rtmp://a.rtmp.youtube.com/live2";
final String YOUTUBE_RTMP_STREAM_KEY = "<STREAM_KEY>";
List<LivestreamOutput> outputs = new ArrayList<>();
outputs.add(new LivestreamOutput(YOUTUBE_RTMP_URL, YOUTUBE_RTMP_STREAM_KEY));
JSONObject config = new JSONObject();
JSONObject layout = new JSONObject();
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT");
JsonUtils.jsonPut(layout, "priority", "PIN");
JsonUtils.jsonPut(layout, "gridSize", 9);
JsonUtils.jsonPut(config, "layout", layout);
JsonUtils.jsonPut(config, "theme", "DARK");
meeting.startLivestream(outputs, config);
stopLivestream()
- It is used to stop meeting livestreaming.
Returns
void
Events associated with stopLivestream():
- Every participant will receive a callback on onLivestreamStateChanged().
Example
- Kotlin
- Java
meeting!!.stopLivestream()
meeting.stopLivestream();
startHls()
- startHls() will start HLS streaming of your meeting.
- You will be able to start HLS and watch the live stream of the meeting over HLS.
- mode is used to start HLS streaming with either both video-and-audio or only audio. By default it is video-and-audio.
- quality is only applicable to video-and-audio.
- transcription: this parameter lets you start post transcription for the recording.
Parameters
- config:
  - layout:
    - type: "GRID" | "SPOTLIGHT" | "SIDEBAR"
    - priority: "SPEAKER" | "PIN"
    - gridSize: Number (max 25)
  - theme: "DARK" | "LIGHT" | "DEFAULT"
  - mode: "video-and-audio" | "audio"
  - quality: "low" | "med" | "high"
- transcription: PostTranscriptionConfig
  - PostTranscriptionConfig.enabled: boolean
  - PostTranscriptionConfig.modelId: String
  - PostTranscriptionConfig.summary: SummaryConfig
    - SummaryConfig.enabled: boolean
    - SummaryConfig.prompt: String
Returns
void
Events associated with startHls():
- Every participant will receive a callback on onHlsStateChanged().
Example
- Kotlin
- Java
val config = JSONObject()
val layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
JsonUtils.jsonPut(layout, "priority", "PIN")
JsonUtils.jsonPut(layout, "gridSize", 9)
JsonUtils.jsonPut(config, "layout", layout)
JsonUtils.jsonPut(config, "orientation", "portrait")
JsonUtils.jsonPut(config, "theme", "DARK")
val prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
val summaryConfig = SummaryConfig(true, prompt)
val modelId = "raman_v1"
val transcription = PostTranscriptionConfig(true, summaryConfig, modelId)
meeting!!.startHls(config, transcription)
JSONObject config = new JSONObject();
JSONObject layout = new JSONObject();
JsonUtils.jsonPut(layout, "type", "SPOTLIGHT");
JsonUtils.jsonPut(layout, "priority", "PIN");
JsonUtils.jsonPut(layout, "gridSize", 9);
JsonUtils.jsonPut(config, "layout", layout);
JsonUtils.jsonPut(config, "orientation", "portrait");
JsonUtils.jsonPut(config, "theme", "DARK");
String prompt = "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary";
SummaryConfig summaryConfig = new SummaryConfig(true, prompt);
String modelId = "raman_v1";
PostTranscriptionConfig transcription = new PostTranscriptionConfig(true, summaryConfig, modelId);
meeting.startHls(config, transcription);
stopHls()
- stopHls() is used to stop the HLS streaming.
Returns
void
Events associated with stopHls():
- Every participant will receive a callback on onHlsStateChanged().
Example
- Kotlin
- Java
meeting!!.stopHls()
meeting.stopHls();
startTranscription()
- It is used to start realtime transcription.
Parameters
- config:
  - type: TranscriptionConfig
  - This specifies the configuration for realtime transcription. You can specify the following properties:
    - TranscriptionConfig.webhookUrl: Webhooks will be triggered when the state of realtime transcription changes. Read more about webhooks here.
    - TranscriptionConfig.summary: SummaryConfig
      - enabled: Indicates whether realtime transcription summary generation is enabled. The summary will be available after realtime transcription stops. Default: false
      - prompt: Provides guidelines or instructions for generating a custom summary based on the realtime transcription content.
Returns
void
Events associated with startTranscription():
- Every participant will receive a callback on onTranscriptionStateChanged().
- Every participant will receive a callback on onTranscriptionText().
Example
- Kotlin
- Java
// Realtime Transcription Configuration
val webhookUrl = "https://www.example.com"
val summaryConfig = SummaryConfig(
    true,
    "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
)
val transcriptionConfig = TranscriptionConfig(
    webhookUrl,
    summaryConfig
)
meeting!!.startTranscription(transcriptionConfig)
// Realtime Transcription Configuration
final String webhookUrl = "https://www.example.com";
SummaryConfig summaryConfig = new SummaryConfig(
    true,
    "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary"
);
TranscriptionConfig transcriptionConfig = new TranscriptionConfig(
    webhookUrl,
    summaryConfig
);
meeting.startTranscription(transcriptionConfig);
stopTranscription()
- It is used to stop realtime transcription.
Returns
void
Events associated with stopTranscription():
- Every participant will receive a callback on onTranscriptionStateChanged().
Example
- Kotlin
- Java
meeting!!.stopTranscription()
meeting.stopTranscription();
changeMode()
- It is used to change the mode.
- You can toggle between CONFERENCE and VIEWER modes.
  - CONFERENCE: Both audio and video streams will be produced and consumed in this mode.
  - VIEWER: Audio and video streams will not be produced or consumed in this mode.
Parameters
- mode: String
Returns
void
Events associated with changeMode():
- Every participant will receive a callback on onParticipantModeChanged().
Example
- Kotlin
- Java
meeting!!.changeMode("VIEWER")
meeting.changeMode("VIEWER");
getMics()
- It will return all connected mic devices.
Returns
Set<AppRTCAudioManager.AudioDevice>
Example
- Kotlin
- Java
val mics = meeting!!.mics
var mic: String
for (i in mics.indices) {
    mic = mics.toTypedArray()[i].toString()
    Toast.makeText(this, "Mic : $mic", Toast.LENGTH_SHORT).show()
}
Set<AppRTCAudioManager.AudioDevice> mics = meeting.getMics();
String mic;
for (int i = 0; i < mics.size(); i++) {
    mic = mics.toArray()[i].toString();
    Toast.makeText(this, "Mic : " + mic, Toast.LENGTH_SHORT).show();
}
changeMic()
- It is used to change the mic device.
- If multiple mic devices are connected, one can change the mic device by using changeMic().
Parameters
- device: AppRTCAudioManager.AudioDevice
Returns
void
Example
- Kotlin
- Java
meeting!!.changeMic(AppRTCAudioManager.AudioDevice.BLUETOOTH)
meeting.changeMic(AppRTCAudioManager.AudioDevice.BLUETOOTH);
changeWebcam()
- It is used to change the camera device.
- If multiple camera devices are connected, one can switch to a specific camera device by its respective device id using changeWebcam().
- You can get a list of connected video devices using VideoSDK.getVideoDevices().
Parameters
- deviceId: String (optional)
  - The deviceId represents the unique identifier of the camera device you wish to switch to. If no deviceId is provided, the facing mode will toggle: from the back camera to the front camera if the back camera is currently in use, or from the front camera to the back camera if the front camera is currently in use.
Returns
void
Example
- Kotlin
- Java
meeting!!.changeWebcam()
meeting.changeWebcam();
setAudioDeviceChangeListener()
- When a local participant changes the mic, AppRTCAudioManager.AudioManagerEvents() is triggered, which can be set by using the setAudioDeviceChangeListener() method.
Parameters
- audioManagerEvents: AppRTCAudioManager.AudioManagerEvents
Returns
void
Example
- Kotlin
- Java
meeting!!.setAudioDeviceChangeListener(object : AudioManagerEvents {
    override fun onAudioDeviceChanged(
        selectedAudioDevice: AppRTCAudioManager.AudioDevice,
        availableAudioDevices: Set<AppRTCAudioManager.AudioDevice>
    ) {
        when (selectedAudioDevice) {
            AppRTCAudioManager.AudioDevice.BLUETOOTH ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: BLUETOOTH", Toast.LENGTH_SHORT).show()
            AppRTCAudioManager.AudioDevice.WIRED_HEADSET ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: WIRED_HEADSET", Toast.LENGTH_SHORT).show()
            AppRTCAudioManager.AudioDevice.SPEAKER_PHONE ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: SPEAKER_PHONE", Toast.LENGTH_SHORT).show()
            AppRTCAudioManager.AudioDevice.EARPIECE ->
                Toast.makeText(this@MainActivity, "Selected AudioDevice: EARPIECE", Toast.LENGTH_SHORT).show()
        }
    }
})
meeting.setAudioDeviceChangeListener(new AppRTCAudioManager.AudioManagerEvents() {
    @Override
    public void onAudioDeviceChanged(AppRTCAudioManager.AudioDevice selectedAudioDevice, Set<AppRTCAudioManager.AudioDevice> availableAudioDevices) {
        switch (selectedAudioDevice) {
            case BLUETOOTH:
                Toast.makeText(MainActivity.this, "Selected AudioDevice: BLUETOOTH", Toast.LENGTH_SHORT).show();
                break;
            case WIRED_HEADSET:
                Toast.makeText(MainActivity.this, "Selected AudioDevice: WIRED_HEADSET", Toast.LENGTH_SHORT).show();
                break;
            case SPEAKER_PHONE:
                Toast.makeText(MainActivity.this, "Selected AudioDevice: SPEAKER_PHONE", Toast.LENGTH_SHORT).show();
                break;
            case EARPIECE:
                Toast.makeText(MainActivity.this, "Selected AudioDevice: EARPIECE", Toast.LENGTH_SHORT).show();
                break;
        }
    }
});
addEventListener()
Parameters
- listener: MeetingEventListener
Returns
void
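Example
A minimal sketch registering a MeetingEventListener on a non-null meeting; which callbacks you override is up to you, and the two shown here are only illustrative:
- Kotlin
private val meetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
        Log.d("VideoSDK", "onMeetingJoined")
    }

    override fun onMeetingLeft() {
        Log.d("VideoSDK", "onMeetingLeft")
    }
}

meeting!!.addEventListener(meetingEventListener)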
removeEventListener()
Parameters
- listener: MeetingEventListener
Returns
void
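Example
A minimal sketch, removing the meetingEventListener instance registered in the addEventListener() example above:
- Kotlin
meeting!!.removeEventListener(meetingEventListener)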
removeAllListeners()
Returns
void
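Example
A minimal sketch, detaching every listener registered on the meeting:
- Kotlin
meeting!!.removeAllListeners()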