VideoSDK Class Methods - Flutter
createRoom()
- This method is provided by the SDK to build an instance of a VideoSDK Room based on the given configuration.
Parameters
- roomId
  - type: String REQUIRED
  - Id of the Room to be created.
- token
  - type: String REQUIRED
  - Sets the AuthToken, which is used for authentication purposes.
- displayName
  - type: String REQUIRED
  - Sets the name of the LocalParticipant to be displayed.
- notification
  - type: NotificationInfo OPTIONAL
  - Sets the configuration for the notification that will be shown while screen sharing.
- micEnabled
  - type: bool OPTIONAL
  - Whether the participant's mic will be on while joining the room. If it is set to false, then the mic of that participant will be disabled by default, but can be enabled or disabled later.
  - Default value of micEnabled is true.
- camEnabled
  - type: bool OPTIONAL
  - Whether the participant's camera will be on while joining the room. If it is set to false, then the camera of that participant will be disabled by default, but can be enabled or disabled later.
  - Default value of camEnabled is true.
- multiStream
  - type: bool OPTIONAL
  - Specifies whether the stream should send multiple resolution layers or a single resolution layer.
  - Default value is true.
- participantId
  - type: String OPTIONAL
  - Unique Id of the participant. If not passed, the SDK will create an Id by itself and use it.
- maxResolution
  - type: String OPTIONAL
  - Sets the maximum upload resolution of that participant's camera video stream.
- defaultCameraIndex
  - type: int OPTIONAL
  - Sets the camera that will be used by default while joining the VideoSDK Room.
  - Default value of defaultCameraIndex is 0.
- customCameraVideoTrack
  - type: CustomTrack OPTIONAL
  - Sets the initial custom video track, created using different encoding parameters, camera facing mode, and optimization mode.
- customMicrophoneAudioTrack
  - type: CustomTrack OPTIONAL
  - Sets the initial custom audio track, created using different encoding parameters and optimization mode.
- mode
  - type: Mode OPTIONAL
  - Sets the participant mode, i.e. CONFERENCE or VIEWER.
  - Default value is CONFERENCE.
- metaData
  - type: Map<String, dynamic> OPTIONAL
  - If you want to provide additional details about a user joining a meeting, such as their profile image, you can pass that information in this parameter.
- debugMode
  - type: bool OPTIONAL
  - Set this parameter to true to let users view detailed error logs generated by the SDK directly on the VideoSDK dashboard.
Returns
Room
Example
CustomTrack? videoTrack = await VideoSDK.createCameraVideoTrack(
  encoderConfig: CustomVideoTrackConfig.h1440p_w1920p,
  multiStream: false,
);

// Creating Custom Audio Track
CustomTrack? audioTrack = await VideoSDK.createMicrophoneAudioTrack(
  encoderConfig: CustomAudioTrackConfig.high_quality,
);

// Create VideoSDK Room
Room room = VideoSDK.createRoom(
  roomId: "<ROOM-ID>",
  token: "<TOKEN>",
  displayName: "<DISPLAY-NAME>",
  micEnabled: false,
  camEnabled: false,
  maxResolution: 'hd',
  multiStream: false,
  defaultCameraIndex: 1, // Front Camera
  customCameraVideoTrack: videoTrack, // custom video track :: optional
  customMicrophoneAudioTrack: audioTrack, // custom audio track :: optional
  notification: const NotificationInfo(
    title: "Video SDK",
    message: "Video SDK is sharing screen in the room",
    icon: "notification_share",
  ),
  metaData: {},
  debugMode: true,
);
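After creating the room instance, it still has to be joined. The following is a minimal sketch, assuming the room variable returned above and the standard Room join flow of the Flutter SDK (Events.roomJoined and join()):
// Register a listener before joining, then join the room created above.
room.on(Events.roomJoined, () {
  print("Room joined");
});
room.join();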
getDevices()
- The getDevices() method returns a list of the currently available media input and output devices, such as microphones, cameras, headsets, and so forth. The method returns a list of DeviceInfo objects describing the devices.
- The DeviceInfo class has four properties:
  - DeviceInfo.deviceId
    - Returns a string that is an identifier for the represented device, persisted across sessions.
  - DeviceInfo.kind
    - Returns an enumerated value that is either videoinput, audiooutput or audioinput.
  - DeviceInfo.label
    - Returns a string describing this device (for example BLUETOOTH).
  - DeviceInfo.groupId
    - Returns a string describing the group to which the device belongs (two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).
- For iOS devices: EARPIECE is not supported whenever a WIRED_HEADSET or BLUETOOTH device is connected. WIRED_HEADSET and BLUETOOTH devices are not supported simultaneously. Priority is given to the most recently connected device.
Returns
Future<List<DeviceInfo>?>
Example
import 'package:videosdk/videosdk.dart';
void getDeviceList() async {
  try {
    List<DeviceInfo>? devices = await VideoSDK.getDevices();
  } catch (ex) {
    print("Error in getDevices");
  }
}
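As a follow-up sketch, you can inspect the properties documented above for each returned device; the property names below match the DeviceInfo fields listed in this section.
void printDeviceList() async {
  List<DeviceInfo>? devices = await VideoSDK.getDevices();
  // Inspect each device using the DeviceInfo properties described above.
  devices?.forEach((device) {
    print("${device.kind}: ${device.label} (id: ${device.deviceId})");
  });
}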
getVideoDevices()
- The getVideoDevices() method returns a list of currently available video devices. The method returns a list of VideoDeviceInfo objects describing the video devices.
- The VideoDeviceInfo class has four properties:
  - VideoDeviceInfo.deviceId
    - Returns a string that is an identifier for the represented device, persisted across sessions.
  - VideoDeviceInfo.kind
    - Returns an enumerated value that is videoinput.
  - VideoDeviceInfo.label
    - Returns a string describing this device (for example BLUETOOTH).
  - VideoDeviceInfo.groupId
    - Returns a string describing the group to which the device belongs (two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).
Returns
Future<List<VideoDeviceInfo>?>
Example
import 'package:videosdk/videosdk.dart';
void getVideoDeviceList() async {
  try {
    List<VideoDeviceInfo>? videoDevices = await VideoSDK.getVideoDevices();
  } catch (ex) {
    print("Error in getVideoDevices");
  }
}
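You can also pick a specific camera from this list and hand it to the Room's camera-switching API. The sketch below only selects a device; how you pass it on afterwards depends on your SDK version, so treat that step as an assumption to verify.
void pickCamera() async {
  List<VideoDeviceInfo>? videoDevices = await VideoSDK.getVideoDevices();
  if (videoDevices != null && videoDevices.isNotEmpty) {
    // Keep the first camera; its deviceId can later be handed to the Room's
    // camera-switching API (verify the exact method name for your SDK version).
    VideoDeviceInfo selectedCamera = videoDevices.first;
    print("Selected camera: ${selectedCamera.label}");
  }
}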
getAudioDevices()
- The getAudioDevices() method returns a list of currently available audio devices. The method returns a list of AudioDeviceInfo objects describing the audio devices.
- The AudioDeviceInfo class has four properties:
  - AudioDeviceInfo.deviceId
    - Returns a string that is an identifier for the represented device, persisted across sessions.
  - AudioDeviceInfo.kind
    - Returns an enumerated value that is audioinput or audiooutput.
  - AudioDeviceInfo.label
    - Returns a string describing this device (for example BLUETOOTH).
  - AudioDeviceInfo.groupId
    - Returns a string describing the group to which the device belongs (two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).
- For iOS devices: EARPIECE is not supported whenever a WIRED_HEADSET or BLUETOOTH device is connected. WIRED_HEADSET and BLUETOOTH devices are not supported simultaneously. Priority is given to the most recently connected device.
Returns
Future<List<AudioDeviceInfo>?>
Example
import 'package:flutter/foundation.dart'; // for kIsWeb
import 'package:videosdk/videosdk.dart';
import 'dart:io';

List<AudioDeviceInfo> audioInputDevices = [];
List<AudioDeviceInfo> audioOutputDevices = [];

void getAudioDeviceList() async {
  try {
    // This function returns only `audiooutput` devices when executed on mobile platforms.
    List<AudioDeviceInfo>? audioDevices = await VideoSDK.getAudioDevices();
    for (AudioDeviceInfo device in audioDevices!) {
      if (!kIsWeb && (Platform.isAndroid || Platform.isIOS)) {
        // For Mobile Applications
        // Note: Changing the audioDevice using `switchAudioDevice()` will affect both the microphone and the speaker.
        audioOutputDevices.add(device);
      } else {
        // For Web and Desktop Applications
        // The input and output devices must be filtered separately to switch the respective device.
        if (device.kind == 'audioinput') {
          audioInputDevices.add(device);
        } else {
          audioOutputDevices.add(device);
        }
      }
    }
  } catch (ex) {
    print("Error in getAudioDevices");
  }
}
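Once you have the filtered device lists, you can switch the active output device on an already-joined room. This is only a sketch: it assumes you hold a Room instance and that it exposes the switchAudioDevice() method referenced in the comment above; verify the exact signature for your SDK version.
// Sketch: switch the active output device on an already-joined Room instance.
// The exact switchAudioDevice() signature may vary across SDK versions; verify it.
void useFirstOutputDevice(Room room) {
  if (audioOutputDevices.isNotEmpty) {
    room.switchAudioDevice(audioOutputDevices.first);
  }
}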
requestPermissions()
- The requestPermissions() method prompts the user for permission to access camera and microphone devices. It returns a Map<String, bool> object, where the keys in the map can be 'audio' for the microphone and 'video' for the camera.
- To enable requesting of microphone and camera permissions on iOS devices, add the following to your Podfile:
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      # Add this to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end
Parameters
- Permissions
  - type: Permissions OPTIONAL
  - Specifies the kinds of media you want to request.
  - Allowed values: audio, video, audio_video
  - Default: audio_video
Returns
Future<Map<String, bool>>
The requestPermissions() method is not supported in Desktop applications and the Firefox browser.
Example
import 'package:videosdk/videosdk.dart';
void requestMediaPermissions() async {
  try {
    // By default both audio and video permissions will be requested.
    Map<String, bool> reqPermissions = await VideoSDK.requestPermissions();

    // For requesting just audio permission.
    Map<String, bool> reqAudioPermissions =
        await VideoSDK.requestPermissions(Permissions.audio);

    // For requesting just video permission.
    Map<String, bool> reqVideoPermissions =
        await VideoSDK.requestPermissions(Permissions.video);

    // For requesting both audio and video permissions.
    Map<String, bool> reqAudioVideoPermissions =
        await VideoSDK.requestPermissions(Permissions.audio_video);

    print("Request Permissions ${reqAudioVideoPermissions['audio']} ${reqAudioVideoPermissions['video']}");
  } catch (ex) {
    print("Error in requestPermission");
  }
}
requestPermissions()
will throw an UnsupportedError when the Platform doesn't support permission request functionality.
checkPermissions()
- The checkPermissions() method checks for permission to access camera and microphone devices. It returns a Map<String, bool> object, where the keys in the map can be 'audio' for the microphone and 'video' for the camera.
- To enable checking of microphone and camera permissions on iOS devices, add the following to your Podfile:
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      # Add this to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end
Parameters
- Permissions
  - type: Permissions OPTIONAL
  - Specifies the types of media to check.
  - Allowed values: audio, video, audio_video
  - Default: audio_video
Returns
Future<Map<String, bool>>
The checkPermissions() method is not supported in Desktop applications and the Firefox browser.
Example
import 'package:videosdk/videosdk.dart';
void checkMediaPermissions() async {
  try {
    // By default both audio and video permissions will be checked.
    Map<String, bool> checkPermissions = await VideoSDK.checkPermissions();

    // For checking just audio permission.
    Map<String, bool> checkAudioPermissions =
        await VideoSDK.checkPermissions(Permissions.audio);

    // For checking just video permission.
    Map<String, bool> checkVideoPermissions =
        await VideoSDK.checkPermissions(Permissions.video);

    // For checking both audio and video permissions.
    Map<String, bool> checkAudioVideoPermissions =
        await VideoSDK.checkPermissions(Permissions.audio_video);

    print("Check Permissions ${checkAudioVideoPermissions['audio']} ${checkAudioVideoPermissions['video']}");
  } catch (ex) {
    print("Error in checkPermission");
  }
}
checkPermissions()
will throw an UnsupportedError when the Platform doesn't support permission check functionality.
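A common pattern is to check the current permission state first and only prompt the user for whatever is still missing. The sketch below combines checkPermissions() and requestPermissions() as documented above; the 'audio' and 'video' keys are the ones described in this section.
void ensureMediaPermissions() async {
  try {
    // Check the current state first, then prompt only if something is missing.
    Map<String, bool> current = await VideoSDK.checkPermissions();
    if (current['audio'] != true || current['video'] != true) {
      Map<String, bool> granted =
          await VideoSDK.requestPermissions(Permissions.audio_video);
      print("audio: ${granted['audio']}, video: ${granted['video']}");
    }
  } catch (ex) {
    print("Error while ensuring media permissions");
  }
}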
checkBluetoothPermission()
- The checkBluetoothPermission() method checks whether the application has permission to access Bluetooth on the device. It returns a boolean value indicating whether Bluetooth permission is granted.
Returns
Future<bool>
The checkBluetoothPermission() method is only supported on Android devices running Android 12 or later.
Example
import 'package:videosdk/videosdk.dart';
void checkBluetooth() async {
  try {
    bool bluetoothPerm = await VideoSDK.checkBluetoothPermission();
  } catch (ex) {
    print("Error in checkBluetoothPermission");
  }
}
checkBluetoothPermission()
will throw an UnsupportedError when the Platform doesn't support bluetooth permission check functionality.
requestBluetoothPermission()
- The
requestBluetoothPermission()
method requests permission to access Bluetooth on the device. It returns a boolean value indicating whether the permission request was successful or not.
Returns
Future<bool>
The requestBluetoothPermission() method is only supported on Android devices running Android 12 or later.
Example
import 'package:videosdk/videosdk.dart';
void requestBluetooth() async {
  try {
    bool bluetoothPerm = await VideoSDK.requestBluetoothPermission();
  } catch (ex) {
    print("Error in requestBluetoothPermission");
  }
}
requestBluetoothPermission()
will throw an UnsupportedError when the Platform doesn't support bluetooth permission request functionality.
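On Android 12 and later, you can combine the two Bluetooth helpers so the prompt only appears when permission has not been granted yet. This sketch uses only the two methods documented above.
void ensureBluetoothPermission() async {
  try {
    // Only prompt when Bluetooth permission has not been granted yet (Android 12+).
    bool granted = await VideoSDK.checkBluetoothPermission();
    if (!granted) {
      granted = await VideoSDK.requestBluetoothPermission();
    }
    print("Bluetooth permission granted: $granted");
  } catch (ex) {
    print("Error while ensuring Bluetooth permission");
  }
}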
on()
- It is used to listen to VideoSDK related events and perform actions based on those events.
Parameters
- event
  - type: Events
  - Specifies the event to be listened to. It defines which particular event from the VideoSDK class you are subscribing to.
- eventHandler
  - type: Function
  - Invoked whenever the specified event occurs. The function is executed with relevant data whenever the event is triggered.
Returns
void
Example
VideoSDK.on(Events.deviceChanged, () {
  // do something
});
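For instance, you can refresh your device lists whenever the available hardware changes. The handler below simply re-queries getDevices(); treat the exact payload delivered to the callback as SDK-version dependent.
// Sketch: refresh the cached device list whenever the available devices change.
VideoSDK.on(Events.deviceChanged, () async {
  List<DeviceInfo>? devices = await VideoSDK.getDevices();
  print("Devices changed, ${devices?.length ?? 0} device(s) available");
});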
createCameraVideoTrack()
- You can create a Video Track using the createCameraVideoTrack() method of the VideoSDK class.
- This method can be used to create a video track using different encoding parameters, camera facing mode, and optimization mode.
Parameters
- cameraId
  - type: String
  - required: false
  - The id of the camera from which the video should be captured.
- encoderConfig
  - type: CustomVideoTrackConfig
  - required: false
  - default: h360p_w640p
  - Allowed values: h90p_w160p | h180p_w320p | h216p_w384p | h360p_w640p | h540p_w960p | h720p_w1280p | h1080p_w1920p | h1440p_w2560p | h2160p_w3840p | h120p_w160p | h180p_w240p | h240p_w320p | h360p_w480p | h480p_w640p | h540p_w720p | h720p_w960p | h1080p_w1440p | h1440p_w1920p
  - The encoder configuration you want to use for the Video Track.
  - The above encoder configurations are valid for both landscape and portrait modes.
- facingMode
  - type: FacingMode
  - required: false
  - Allowed values: FacingMode.front | FacingMode.environment
  - Specifies whether to use the front or back camera for the video track.
- multiStream
  - type: bool
  - required: false
  - default: true
  - Specifies whether the stream should send multiple resolution layers or a single resolution layer.
  - info: For meetings with four or fewer participants, setting multiStream: false is regarded as best practice.
  - This parameter is only available from v1.0.9.
Returns
Future<CustomTrack?>
Example
CustomTrack? videoTrack = await VideoSDK.createCameraVideoTrack(
  encoderConfig: CustomVideoTrackConfig.h1440p_w1920p,
  multiStream: false,
  facingMode: FacingMode.front,
);
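For instance, to capture from the back camera instead, pass FacingMode.environment (one of the two allowed values listed above); the resulting track can then be handed to createRoom() via customCameraVideoTrack as shown earlier.
// Sketch: create a back-camera track using the documented FacingMode values.
CustomTrack? backCameraTrack = await VideoSDK.createCameraVideoTrack(
  encoderConfig: CustomVideoTrackConfig.h720p_w1280p,
  facingMode: FacingMode.environment,
);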
createMicrophoneAudioTrack()
- You can create an Audio Track using the createMicrophoneAudioTrack() method of the VideoSDK class.
- This method can be used to create an audio track using different encoding parameters and noise cancellation configuration.
Parameters
- microphoneId
  - type: String
  - required: false
  - The id of the microphone from which the audio should be captured.
- encoderConfig
  - type: CustomAudioTrackConfig
  - required: false
  - default: speech_standard
  - Allowed values: speech_low_quality | speech_standard | music_standard | standard_stereo | high_quality | high_quality_stereo
  - The encoder configuration you want to use for the Audio Track.
- noiseConfig
  - echoCancellation
    - type: bool
    - required: false
    - If true, echo cancellation will be turned on; otherwise it will be turned off.
  - autoGainControl
    - type: bool
    - required: false
    - If true, auto gain control will be turned on; otherwise it will be turned off.
  - noiseSuppression
    - type: bool
    - required: false
    - If true, noise suppression will be turned on; otherwise it will be turned off.
Returns
Future<CustomTrack?>
Example
CustomTrack? audioTrack = await VideoSDK.createMicrophoneAudioTrack(
  encoderConfig: CustomAudioTrackConfig.high_quality,
);
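As a further sketch, you can target a specific microphone by combining this method with getAudioDevices(); the microphoneId parameter is documented above, while the 'audioinput' filtering mirrors the getAudioDevices() section. The helper name below is illustrative only.
// Sketch: capture audio from a specific microphone selected via getAudioDevices().
Future<CustomTrack?> createTrackForFirstMic() async {
  List<AudioDeviceInfo>? audioDevices = await VideoSDK.getAudioDevices();
  // Note: on mobile platforms getAudioDevices() returns output devices only,
  // so this filtering is mainly useful on web and desktop.
  List<AudioDeviceInfo> inputDevices =
      audioDevices?.where((d) => d.kind == 'audioinput').toList() ?? [];
  if (inputDevices.isEmpty) return null;
  return await VideoSDK.createMicrophoneAudioTrack(
    microphoneId: inputDevices.first.deviceId,
    encoderConfig: CustomAudioTrackConfig.speech_standard,
  );
}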
applyVideoProcessor()
- This method enables the application of a video processor to incorporate effects into the video stream. It takes the name of a processor that was registered during initialization.
Parameters
- videoProcessorName
  - type: String
  - required: true
  - The name of the processor whose effect you wish to apply.
Returns
void
Example
VideoSDK.applyVideoProcessor(videoProcessorName: "VirtualBGProcessor");
removeVideoProcessor()
- This method removes a previously applied video processor from the video stream, allowing users to disable specific effects or revert to the original video state.
- The removeVideoProcessor() method will remove the effect of the currently active video processor.
Returns
void
Example
VideoSDK.removeVideoProcessor();