Precall Setup - iOS
Before users dive into a video call, a precall experience gives their setup a quick check-up: microphone, camera, and connection are verified up front, much like a debug session before the main code execution. It's a crucial step in ensuring your app's call quality is top-notch.
Why is it necessary?
Why invest time and effort into crafting a precall experience, you wonder? Well, picture this scenario: your users eagerly join a video call, only to encounter a myriad of technical difficulties—muted microphones, pixelated cameras, and laggy connections. Not exactly the smooth user experience you had in mind, right?
By integrating a robust precall process into your app, developers become the unsung heroes, preemptively addressing potential pitfalls and ensuring that users step into their video calls with confidence.
Using PreCall Functions
Check Permissions
- Begin by ensuring that your application has the necessary permissions to access user devices such as the camera and microphone.
- Use the `getAudioPermissionStatus()` and `getVideoPermissionStatus()` methods of the `VideoSDK` class to verify whether permissions are granted.
```swift
let audioPermission = VideoSDK.getAudioPermissionStatus() // audio permission status
let videoPermission = VideoSDK.getVideoPermissionStatus() // video permission status
```
Request Permissions (if necessary)
- If the permission status is `notDetermined`, use the `getAudioPermission()` and `getVideoPermission()` methods of the `VideoSDK` class to prompt users to grant access to their devices.
- Calling these functions automatically handles the `notDetermined` state and prompts the user to grant permission.
```swift
import UIKit

class StartViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // UI setup
        setupUI()
        // prompt the user to grant permissions when the app starts
        VideoSDK.getVideoPermission()
        VideoSDK.getAudioPermission()
    }

    func setupUI() {
        // your UI code
    }
}
```
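The permission flow above can be summarized as a small decision helper. Note that `PermissionStatus` and `nextAction(audio:video:)` below are illustrative stand-ins defined locally for this sketch, not part of the VideoSDK API:

```swift
// Illustrative model of the permission flow; this enum mirrors the
// SDK's status values but is defined locally for the sketch.
enum PermissionStatus {
    case notDetermined, authorised, denied, restricted
}

enum PrecallAction {
    case promptForPermission // call getAudioPermission()/getVideoPermission()
    case sendToSettings      // deep-link to Settings via handlePermission()
    case proceed             // both devices are usable
}

func nextAction(audio: PermissionStatus, video: PermissionStatus) -> PrecallAction {
    if audio == .authorised && video == .authorised {
        return .proceed
    }
    if audio == .notDetermined || video == .notDetermined {
        return .promptForPermission
    }
    // denied or restricted: the system prompt will not appear again,
    // so the user must change the permission manually in Settings
    return .sendToSettings
}
```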
- If the permissions are `denied` or `restricted`, you need to prompt the user to approve them manually from the Settings app, using the `handlePermission()` function of the `AuthorisationDelegate`. Below is an example of how you can do it.
```swift
import UIKit

class StartViewController: UIViewController {

    // settings button
    @IBOutlet weak var openSetting: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // UI setup
        setupUI()
        // prompt the user to grant permissions when the app starts
        VideoSDK.getVideoPermission()
        VideoSDK.getAudioPermission()
    }

    func setupUI() {
        // your UI code
        // ...
        settingsButton()
    }

    // sets the visibility of the settings button
    func settingsButton() {
        let audioStatus = VideoSDK.getAudioPermissionStatus()
        let videoStatus = VideoSDK.getVideoPermissionStatus()
        if audioStatus == .authorised && videoStatus == .authorised {
            openSetting.isHidden = true
        } else {
            openSetting.isHidden = false
        }
    }

    // settings button tapped
    @IBAction func openSettingsButtonTapped(_ sender: Any) {
        handlePermission()
    }
}

// implement AuthorisationDelegate
extension StartViewController: AuthorisationDelegate {
    func handlePermission() {
        guard let settingsURL = URL(string: UIApplication.openSettingsURLString) else { return }
        if UIApplication.shared.canOpenURL(settingsURL) {
            UIApplication.shared.open(settingsURL, options: [:], completionHandler: nil)
        }
    }
}
```
Get Device Lists
- Once you have the necessary permissions, fetch the lists of available cameras and audio devices using the `getCameras()` and `getAudioDevices()` methods of the `VideoSDK` class, respectively.
```swift
let cameras = VideoSDK.getCameras()
let audioDevices = VideoSDK.getAudioDevices()
```
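In a typical precall screen these lists feed a device picker. The helpers below are an illustrative sketch, not SDK calls; the `[String]` arrays stand in for the device objects the SDK returns (in the real app you would map over `cameras` / `audioDevices` instead):

```swift
// Sketch: turning a device list into picker labels and a default selection.
func pickerLabels(for deviceNames: [String]) -> [String] {
    // show a placeholder row when no device is available
    deviceNames.isEmpty ? ["No device found"] : deviceNames
}

func defaultSelection(from deviceNames: [String]) -> String? {
    // preselect the first available device
    deviceNames.first
}
```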
Create Media Tracks
- Create media tracks for the selected microphone and camera using the `createMicrophoneAudioTrack()` and `createCameraVideoTrack()` methods.
- Ensure that these tracks originate from the user-selected devices for accurate testing.
```swift
import WebRTC

// Create a custom video track
guard let videoMediaTrack = try? VideoSDK.createCameraVideoTrack(
    // Accepts an enum value of CustomVideoTrackConfig, which contains the
    // resolution (height x width) of the video you want to capture.
    encoderConfig: .h720p_w1280p, // .h540p_w960p | .h720p_w1280p ... Default: .h360p_w640p
    // Specifies whether to use the front or back camera for the video track.
    facingMode: .front, // .back; Default: .front
    // We will discuss this parameter in the next step.
    multiStream: true, // false; Default: true
    // A video codec compresses video data for efficient transmission over the
    // internet, balancing quality and bandwidth usage.
    codec: .H264 // Default: .VP8
) else { return }

// Create a custom audio track
guard let audioMediaTrack = try? VideoSDK.createAudioTrack(
    noiseConfig: noiseConfig(
        noiseSupression: true,
        echoCancellation: true,
        autoGainControl: true,
        highPassFilter: true
    )
) else { return }
```
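Since the codec and resolution trade quality against bandwidth, a precall screen sometimes picks the capture resolution from a rough network estimate. The sketch below is purely illustrative: `EncoderPreset` is a local enum whose case names mirror the `CustomVideoTrackConfig` values used above, and the kbps thresholds are arbitrary examples, not SDK recommendations:

```swift
// Illustrative helper: choosing a capture resolution from a rough
// bandwidth estimate. Defined locally; not part of the VideoSDK API.
enum EncoderPreset: String {
    case h360p_w640p, h540p_w960p, h720p_w1280p
}

func preset(forBandwidthKbps kbps: Int) -> EncoderPreset {
    switch kbps {
    case ..<500:  return .h360p_w640p   // constrained network
    case ..<1500: return .h540p_w960p   // moderate headroom
    default:      return .h720p_w1280p  // plenty of headroom
    }
}
```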
Passing States to Meeting
- Ensure that all relevant states, such as microphone and camera status (on/off), are passed into the meeting from the precall screen.
- This can be accomplished by passing these states and media streams to VideoSDK's `initMeeting`.
- By ensuring this integration, users can seamlessly transition from the precall setup to the actual meeting while preserving their preferred settings.
```swift
let meeting = VideoSDK.initMeeting(
    meetingId: "abcd-efgh-xyzw",
    participantId: "JD", // optional
    participantName: "John Doe",
    micEnabled: micStatus,
    webcamEnabled: webcamStatus,
    customCameraVideoStream: videoMediaTrack, // optional
    customAudioStream: customAudioStream, // optional
    mode: .CONFERENCE // optional (default mode is .CONFERENCE)
)
```
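One simple way to keep those states together between the precall screen and the meeting screen is a small value type. The `PrecallState` struct below is an illustrative container defined for this sketch, not a VideoSDK type; it just bundles what `initMeeting(...)` needs:

```swift
// Illustrative container for the states chosen on the precall screen.
struct PrecallState {
    var micEnabled: Bool
    var webcamEnabled: Bool
}

// Toggling a control on the precall screen produces the updated state
// that is later handed to the meeting screen.
func toggledMic(_ state: PrecallState) -> PrecallState {
    var next = state
    next.micEnabled.toggle()
    return next
}
```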
You can explore the complete implementation of the Precall functions in the official iOS SDK example available here.
API Reference
The API references for all the methods utilized in this guide are provided below.