Version: 1.2.x

VideoSDK Class Methods - Flutter

createRoom()

  • This method is provided by the SDK to build an instance of a VideoSDK Room based on the given configuration.

Parameters

  • roomId

    • type: String
    • REQUIRED
    • Id of the Room to be created.
  • token

    • type: String
    • REQUIRED
    • Sets AuthToken, which is used for authentication purposes.
  • displayName

    • type: String
    • REQUIRED
    • Sets name of the LocalParticipant to be displayed.
  • notification

    • type: [NotificationInfo]
    • OPTIONAL
    • Sets the configuration for the notification that will be shown while screen sharing.
  • micEnabled

    • type: bool
    • OPTIONAL
    • Whether the participant's mic will be on while joining the room. If set to false, the mic of that participant will be disabled by default, but can be enabled or disabled later.
    • Default value of micEnabled is true.
  • camEnabled

    • type: bool
    • OPTIONAL
    • Whether the participant's camera will be on while joining the room. If set to false, the camera of that participant will be disabled by default, but can be enabled or disabled later.
    • Default value of camEnabled is true.
  • multiStream

    • type: bool
    • OPTIONAL
    • Default value is true.
    • It will specify whether the stream should send multiple resolution layers or a single resolution layer.
  • participantId

    • type: String
    • OPTIONAL
    • Unique Id of the participant. If not passed, the SDK will create an Id by itself and use that Id.
  • maxResolution

    • type: String
    • OPTIONAL
    • Sets the maximum upload resolution of that participant's camera video stream.
  • defaultCameraIndex

    • type: int
    • OPTIONAL
    • Sets the camera that will be used by default while joining the VideoSDK Room.
    • Default value of defaultCameraIndex is 0.
  • customCameraVideoTrack

    • type: CustomTrack
    • OPTIONAL
    • Set the initial custom video track using different encoding parameters, camera facing mode, and optimization mode.
  • customMicrophoneAudioTrack

    • type: CustomTrack
    • OPTIONAL
    • Set the initial custom audio track using different encoding parameters and optimization mode.
  • mode

    • type: Mode
    • OPTIONAL
    • Set the participant mode i.e. CONFERENCE or VIEWER.
    • Default value is CONFERENCE.
  • metaData

    • type: Map<String,dynamic>
    • OPTIONAL
    • If you want to provide additional details about a user joining a meeting, such as their profile image, you can pass that information in this parameter.
  • debugMode

    • type: Boolean
    • OPTIONAL
    • If you want to enable users to view detailed error logs generated by our SDK directly on the VideoSDK's dashboard, set this parameter to true.

Returns

  • Room

Example


// Creating Custom Video Track
CustomTrack? videoTrack = await VideoSDK.createCameraVideoTrack(
  encoderConfig: CustomVideoTrackConfig.h1440p_w1920p,
  multiStream: false,
);

// Creating Custom Audio Track
CustomTrack? audioTrack = await VideoSDK.createMicrophoneAudioTrack(
  encoderConfig: CustomAudioTrackConfig.high_quality,
);

// Create VideoSDK Room
Room room = VideoSDK.createRoom(
  roomId: "<ROOM-ID>",
  token: "<TOKEN>",
  displayName: "<DISPLAY-NAME>",
  micEnabled: false,
  camEnabled: false,
  maxResolution: 'hd',
  multiStream: false,
  defaultCameraIndex: 1, // Front Camera
  customCameraVideoTrack: videoTrack, // custom video track :: optional
  customMicrophoneAudioTrack: audioTrack, // custom audio track :: optional
  notification: const NotificationInfo(
    title: "Video SDK",
    message: "Video SDK is sharing screen in the room",
    icon: "notification_share",
  ),
  metaData: {},
  debugMode: true,
);
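Once the room instance is created, a typical next step is to join it. This sketch assumes the `join()` method of the Room class from the standard VideoSDK Flutter quick-start flow; the Room class itself is not documented on this page.

```dart
// Sketch only: room.join() is assumed from the Room class API,
// which is documented separately from this page.
Room room = VideoSDK.createRoom(
  roomId: "<ROOM-ID>",
  token: "<TOKEN>",
  displayName: "<DISPLAY-NAME>",
);

// Join the created room.
room.join();
```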

getDevices()

  • The getDevices() method returns a list of the currently available media input and output devices, such as microphones, cameras, headsets, and so forth. The method returns a list of DeviceInfo objects describing the devices.

  • The DeviceInfo class has four properties:

    1. DeviceInfo.deviceId

      • Returns a string that is an identifier for the represented device, persisted across sessions.
    2. DeviceInfo.kind

      • Returns an enumerated value that is either videoinput, audioinput, or audiooutput.
    3. DeviceInfo.label

      • Returns a string describing this device (for example BLUETOOTH).
    4. DeviceInfo.groupId

      • Returns a string describing this group in which the device belongs (Two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).
note

For iOS devices:

  • EARPIECE is not supported whenever WIRED_HEADSET or BLUETOOTH device is connected.
  • WIRED_HEADSET and BLUETOOTH devices are not supported simultaneously. Priority is given to the most recently connected device.

Returns

  • Future<List<DeviceInfo>?>

Example

import 'package:videosdk/videosdk.dart';

void getDeviceList() async {
  try {
    List<DeviceInfo>? devices = await VideoSDK.getDevices();
  } catch (ex) {
    print("Error in getDevices ");
  }
}
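The four DeviceInfo properties listed above can be inspected directly on each returned device, for example:

```dart
import 'package:videosdk/videosdk.dart';

void printDeviceList() async {
  List<DeviceInfo>? devices = await VideoSDK.getDevices();
  for (DeviceInfo device in devices ?? []) {
    // kind, label, deviceId and groupId are the documented DeviceInfo properties.
    print("${device.kind}: ${device.label} (id: ${device.deviceId})");
  }
}
```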

getVideoDevices()

  • The getVideoDevices method returns a list of currently available video devices. The method returns a list of VideoDeviceInfo objects describing the video devices.

  • The VideoDeviceInfo class has four properties:

    1. VideoDeviceInfo.deviceId

      • Returns a string that is an identifier for the represented device, persisted across sessions.
    2. VideoDeviceInfo.kind

      • Returns an enumerated value that is videoinput.
    3. VideoDeviceInfo.label

      • Returns a string describing this device (for example BLUETOOTH).
    4. VideoDeviceInfo.groupId

      • Returns a string describing this group in which the device belongs (Two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).

Returns

  • Future<List<VideoDeviceInfo>?>

Example

import 'package:videosdk/videosdk.dart';

void getVideoDeviceList() async {
  try {
    List<VideoDeviceInfo>? videoDevices = await VideoSDK.getVideoDevices();
  } catch (ex) {
    print("Error in getVideoDevices");
  }
}
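The deviceId of a returned VideoDeviceInfo can be passed as the cameraId parameter of createCameraVideoTrack() (documented later on this page) to capture from a specific camera:

```dart
import 'package:videosdk/videosdk.dart';

void createTrackFromFirstCamera() async {
  List<VideoDeviceInfo>? videoDevices = await VideoSDK.getVideoDevices();
  if (videoDevices != null && videoDevices.isNotEmpty) {
    // Capture from the first available camera.
    CustomTrack? videoTrack = await VideoSDK.createCameraVideoTrack(
      cameraId: videoDevices.first.deviceId,
    );
  }
}
```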

getAudioDevices()

  • The getAudioDevices method returns a list of currently available audio devices. The method returns a list of AudioDeviceInfo objects describing the audio devices.

  • The AudioDeviceInfo class has four properties:

    1. AudioDeviceInfo.deviceId

      • Returns a string that is an identifier for the represented device, persisted across sessions.
    2. AudioDeviceInfo.kind

      • Returns an enumerated value that is audioinput or audiooutput.
    3. AudioDeviceInfo.label

      • Returns a string describing this device (for example BLUETOOTH).
    4. AudioDeviceInfo.groupId

      • Returns a string describing this group in which the device belongs (Two devices have the same groupId if they belong to the same physical device; for example, a monitor with both a built-in camera and microphone).
note

For iOS devices:

  • EARPIECE is not supported whenever WIRED_HEADSET or BLUETOOTH device is connected.
  • WIRED_HEADSET and BLUETOOTH devices are not supported simultaneously. Priority is given to the most recently connected device.

Returns

  • Future<List<AudioDeviceInfo>?>

Example

import 'package:flutter/foundation.dart'; // for kIsWeb
import 'package:videosdk/videosdk.dart';
import 'dart:io';

List<AudioDeviceInfo> audioInputDevices = [];
List<AudioDeviceInfo> audioOutputDevices = [];

void getAudioDeviceList() async {
  try {
    // This function returns only `audiooutput` devices when executed on mobile platforms.
    List<AudioDeviceInfo>? audioDevices = await VideoSDK.getAudioDevices();
    for (AudioDeviceInfo device in audioDevices!) {
      if (!kIsWeb && (Platform.isAndroid || Platform.isIOS)) {
        // For Mobile Applications
        // Note: Changing the audio device using `switchAudioDevice()` will affect both the microphone and the speaker.
        audioOutputDevices.add(device);
      } else {
        // For Web and Desktop Applications
        // The input and output devices must be filtered separately to switch the respective device.
        if (device.kind == 'audioinput') {
          audioInputDevices.add(device);
        } else {
          audioOutputDevices.add(device);
        }
      }
    }
  } catch (ex) {
    print("Error in getAudioDevices ");
  }
}

requestPermissions()

  • The requestPermissions() method prompts the user for permission to access camera and microphone devices. It returns a Map<String, bool> object, where the keys in the map can be 'audio' for the microphone and 'video' for the camera.
  • To enable requesting of microphone and camera permissions on iOS devices, add the following to your Podfile:
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      # Add this to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end

Parameters

  • Permissions
    • A Permissions value specifying the kinds of media you want to request.
    • Optional
    • Allowed Values: audio, video, audio_video
    • Default: audio_video

Returns

  • Future<Map<String, bool>>
info
  • requestPermissions() method is not supported on Desktop Applications and Firefox Browser.

Example

import 'package:videosdk/videosdk.dart';

void requestMediaPermissions() async {
  try {
    // By default both audio and video permissions will be requested.
    Map<String, bool> reqPermissions = await VideoSDK.requestPermissions();
    // For requesting just audio permission.
    Map<String, bool> reqAudioPermissions = await VideoSDK.requestPermissions(Permissions.audio);
    // For requesting just video permission.
    Map<String, bool> reqVideoPermissions = await VideoSDK.requestPermissions(Permissions.video);
    // For requesting both audio and video permissions.
    Map<String, bool> reqAudioVideoPermissions = await VideoSDK.requestPermissions(Permissions.audio_video);

    print("Request Permissions ${reqAudioVideoPermissions['audio']} ${reqAudioVideoPermissions['video']}");
  } catch (ex) {
    print("Error in requestPermission ");
  }
}
tip

requestPermissions() will throw an UnsupportedError when the Platform doesn't support permission request functionality.


checkPermissions()

  • The checkPermissions() method checks for permission to access camera and microphone devices. It returns a Map<String, bool> object, where the keys in the map can be 'audio' for the microphone and 'video' for the camera.
  • To enable checking of microphone and camera permissions on iOS devices, add the following to your Podfile:
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      # Add this to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end

Parameters

  • Permissions
    • A Permissions value specifying the types of media to check.
    • Optional
    • Allowed Values: audio, video, audio_video
    • Default: audio_video

Returns

  • Future<Map<String, bool>>
info
  • checkPermissions() method is not supported on Desktop Applications and Firefox Browser.

Example

import 'package:videosdk/videosdk.dart';

void checkMediaPermissions() async {
  try {
    // By default both audio and video permissions will be checked.
    Map<String, bool> checkPermissions = await VideoSDK.checkPermissions();
    // For checking just audio permission.
    Map<String, bool> checkAudioPermissions = await VideoSDK.checkPermissions(Permissions.audio);
    // For checking just video permission.
    Map<String, bool> checkVideoPermissions = await VideoSDK.checkPermissions(Permissions.video);
    // For checking both audio and video permissions.
    Map<String, bool> checkAudioVideoPermissions = await VideoSDK.checkPermissions(Permissions.audio_video);

    print("Check Permissions ${checkAudioVideoPermissions['audio']} ${checkAudioVideoPermissions['video']}");
  } catch (ex) {
    print("Error in checkPermission ");
  }
}
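checkPermissions() and requestPermissions() can be combined so that permissions are requested only when they are not already granted. The 'audio' and 'video' map keys follow the return shape described above:

```dart
import 'package:videosdk/videosdk.dart';

void ensureMediaPermissions() async {
  Map<String, bool> status = await VideoSDK.checkPermissions();
  // Request permissions only if either one is missing.
  if (status['audio'] != true || status['video'] != true) {
    await VideoSDK.requestPermissions(Permissions.audio_video);
  }
}
```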
tip

checkPermissions() will throw an UnsupportedError when the Platform doesn't support permission check functionality.


checkBluetoothPermission()

  • The checkBluetoothPermission method checks if the application has permission to access Bluetooth on the device. It returns a boolean value indicating whether Bluetooth permission is granted or not.

Returns

  • Future<bool>
info
  • checkBluetoothPermission() method is only supported on Android devices running Android 12 or later.

Example

import 'package:videosdk/videosdk.dart';

void checkBluetooth() async {
  try {
    bool bluetoothPerm = await VideoSDK.checkBluetoothPermission();
  } catch (ex) {
    print("Error in checkBluetoothPermission ");
  }
}
tip

checkBluetoothPermission() will throw an UnsupportedError when the Platform doesn't support bluetooth permission check functionality.


requestBluetoothPermission()

  • The requestBluetoothPermission() method requests permission to access Bluetooth on the device. It returns a boolean value indicating whether the permission request was successful or not.

Returns

  • Future<bool>
info
  • requestBluetoothPermission() method is only supported on Android devices running Android 12 or later.

Example

import 'package:videosdk/videosdk.dart';

void requestBluetooth() async {
  try {
    bool bluetoothPerm = await VideoSDK.requestBluetoothPermission();
  } catch (ex) {
    print("Error in requestBluetoothPermission ");
  }
}
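The two Bluetooth methods can be combined in the same check-then-request pattern. As noted above, both throw an UnsupportedError on platforms other than Android 12+:

```dart
import 'package:videosdk/videosdk.dart';

void ensureBluetoothPermission() async {
  try {
    bool granted = await VideoSDK.checkBluetoothPermission();
    if (!granted) {
      // Request only when the permission is not already granted.
      granted = await VideoSDK.requestBluetoothPermission();
    }
    print("Bluetooth permission granted: $granted");
  } catch (ex) {
    print("Bluetooth permission is not supported on this platform");
  }
}
```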
tip

requestBluetoothPermission() will throw an UnsupportedError when the Platform doesn't support bluetooth permission request functionality.


on()

  • It is used to listen to VideoSDK related events and perform actions based on those events.

Parameters

  • event

    • type: Events
    • This will specify the event to be listened to. It defines which particular event from the VideoSDK class you are subscribing to.
  • eventHandler

    • type: Function
    • This will be invoked whenever the specified event occurs. The function is executed with relevant data whenever the event is triggered.

Returns

  • void

Example

VideoSDK.on(Events.deviceChanged, () {
  // do something
});
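For example, the list returned by getDevices() can be refreshed whenever a device is plugged in or removed. This sketch assumes a zero-argument handler, matching the example above:

```dart
import 'package:videosdk/videosdk.dart';

void listenForDeviceChanges() {
  VideoSDK.on(Events.deviceChanged, () async {
    // Re-fetch the available devices whenever the device set changes.
    List<DeviceInfo>? devices = await VideoSDK.getDevices();
    print("Devices changed: ${devices?.length ?? 0} now available");
  });
}
```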

createCameraVideoTrack()

  • You can create a video track using the createCameraVideoTrack() method of the VideoSDK class.
  • This method can be used to create a video track using different encoding parameters, camera facing mode, and optimization mode.

Parameters

  • cameraId:

    • type: String
    • required: false
    • It will be the id of the camera from which the video should be captured.
  • encoderConfig:

    • type: CustomVideoTrackConfig
    • required: false
    • default: h360p_w640p
    • Allowed values : h90p_w160p | h180p_w320p | h216p_w384p | h360p_w640p | h540p_w960p | h720p_w1280p | h1080p_w1920p | h1440p_w2560p | h2160p_w3840p | h120p_w160p | h180p_w240p | h240p_w320p | h360p_w480p | h480p_w640p | h540p_w720p | h720p_w960p | h1080p_w1440p | h1440p_w1920p
    • It will be the encoder configuration you want to use for the video track.
note

The above-mentioned encoder configurations are valid for both landscape and portrait mode.

  • facingMode:

    • type: FacingMode
    • required: false
    • Allowed values : FacingMode.front | FacingMode.environment
    • It will specify whether to use front or back camera for the video track.
  • multiStream

    • type: boolean
    • required: false
    • default: true
    • It will specify if the stream should send multiple resolution layers or single resolution layer.
    info
    • For meetings with fewer than or equal to four participants, setting multiStream:false is regarded as best practice.
    • This parameter is only available from v1.0.9.

Returns

  • Future<CustomTrack?>

Example

CustomTrack? videoTrack = await VideoSDK.createCameraVideoTrack(
  encoderConfig: CustomVideoTrackConfig.h1440p_w1920p,
  multiStream: false,
  facingMode: FacingMode.front,
);

createMicrophoneAudioTrack()

  • You can create an audio track using the createMicrophoneAudioTrack() method of the VideoSDK class.
  • This method can be used to create an audio track using different encoding parameters and noise cancellation configuration.

Parameters

  • microphoneId:

    • type: String
    • required: false
    • It will be the id of the mic from which the audio should be captured.
  • encoderConfig:

    • type: CustomAudioTrackConfig
    • required: false
    • default: speech_standard
    • Allowed values : speech_low_quality | speech_standard | music_standard | standard_stereo | high_quality | high_quality_stereo
    • It will be the encoder configuration you want to use for Audio Track.
  • noiseConfig

    • echoCancellation

      • type: boolean
      • required: false
      • If true, echo cancellation will be turned on; otherwise it will be turned off.
    • autoGainControl

      • type: boolean
      • required: false
      • If true, auto gain control will be turned on; otherwise it will be turned off.
    • noiseSuppression

      • type: boolean
      • required: false
      • If true, noise suppression will be turned on; otherwise it will be turned off.

Returns

  • Future<CustomTrack?>

Example

CustomTrack? audioTrack = await VideoSDK.createMicrophoneAudioTrack(
  encoderConfig: CustomAudioTrackConfig.high_quality,
);
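The noiseConfig options described above can also be set when creating the track. The shape used below (a map keyed by the three documented property names) is an assumption, not a confirmed signature; consult the SDK source if it differs:

```dart
// Sketch only: the map shape of `noiseConfig` here is assumed from the
// property names documented in this section, not a confirmed API signature.
CustomTrack? audioTrack = await VideoSDK.createMicrophoneAudioTrack(
  encoderConfig: CustomAudioTrackConfig.speech_standard,
  noiseConfig: {
    "echoCancellation": true,
    "autoGainControl": true,
    "noiseSuppression": true,
  },
);
```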

applyVideoProcessor

  • This method enables the application of a video processor to incorporate effects into the video stream. It takes the name of a processor that was registered during initialization.
info

Only processors that have been registered during app initialization are available for use during the meeting. To learn how to register your own custom video processor, check out this guide for Android and for iOS.

Parameters

  • videoProcessorName:

    • type: String
    • required: true
    • The name of the processor whose effect you wish to apply.

Returns

  • void

Example

VideoSDK.applyVideoProcessor(videoProcessorName: "VirtualBGProcessor");

removeVideoProcessor

  • This method removes a previously applied video processor from the video stream. This allows users to disable specific effects or revert to the original video state.
  • The removeVideoProcessor method will remove the effect of the currently active video processor.

Returns

  • void

Example

VideoSDK.removeVideoProcessor();
