
Quick Start for Interactive Live Streaming in JavaScript

VideoSDK enables you to embed video calling features into your JavaScript application in minutes.

In this quickstart, we will explore the interactive live streaming feature of Video SDK, walking step by step through integrating it using the JavaScript Video SDK.

This guide will get you up and running with VideoSDK video and audio calling in minutes.

Prerequisites

Before proceeding, ensure that your development environment meets the following requirements:

  • Video SDK Developer Account (if you don't have one, sign up via the Video SDK Dashboard)
  • Node.js and NPM installed on your device.
info

You need a VideoSDK account to generate a token. Visit the VideoSDK dashboard to generate one.

Getting Started with the Code!

Follow these steps to create the environment necessary to add live streaming to your app. You can also find the code sample for this quickstart here.

Install Video SDK

You can import VideoSDK using a <script> tag, as shown below, or install it via npm with the command npm install @videosdk.live/js-sdk; make sure you are in your app directory before running it.

<html>
  <head>
    <!--.....-->
  </head>
  <body>
    <!--.....-->
    <script src="https://sdk.videosdk.live/js-sdk/0.0.67/videosdk.js"></script>
  </body>
</html>
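
If you go the npm route instead of the CDN script, you would import the SDK in your bundled code; a minimal sketch, assuming the package's documented named export:

// npm alternative (after running: npm install @videosdk.live/js-sdk)
import { VideoSDK } from "@videosdk.live/js-sdk";

// the rest of this quickstart uses the global VideoSDK object provided by
// the <script> tag; with the npm import, the same API is available on the
// imported VideoSDK object.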

Structure of the project

Your project structure should look like this.

Project Structure
root
├── index.html
├── config.js
└── index.js

We are going to work on three files:

  • index.html: Responsible for creating the basic UI.
  • config.js: Responsible for storing the token.
  • index.js: Responsible for rendering the meeting view and joining the meeting.

Step 1: Create UI

In this step, we will create an HTML file with two screens: join-screen and grid-screen.

index.html
<!DOCTYPE html>
<html>
  <head> </head>

  <body>
    <div id="join-screen">
      <!-- Create new Meeting Button -->
      <button id="createMeetingBtn">Create Meeting</button>
      OR
      <!-- Join existing Meeting -->
      <input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" />
      <button id="joinHostBtn">Join As Host</button>
      <button id="joinViewerBtn">Join As Viewer</button>
    </div>

    <!-- for managing meeting status -->
    <div id="textDiv"></div>

    <div id="grid-screen" style="display: none">
      <!-- to display MeetingId -->
      <h3 id="meetingIdHeading"></h3>
      <h3 id="hlsStatusHeading"></h3>

      <div id="speakerView" style="display: none">
        <!-- Controllers -->
        <button id="leaveBtn">Leave</button>
        <button id="toggleMicBtn">Toggle Mic</button>
        <button id="toggleWebCamBtn">Toggle WebCam</button>
        <button id="startHlsBtn">Start HLS</button>
        <button id="stopHlsBtn">Stop HLS</button>
      </div>

      <!-- render Video -->
      <div id="videoContainer"></div>
    </div>
    <script src="https://sdk.videosdk.live/js-sdk/0.0.67/videosdk.js"></script>
    <script src="config.js"></script>
    <script src="index.js"></script>

    <!-- hls.js library script -->
    <script src="https://cdn.jsdelivr.net/npm/hls.js"></script>
  </body>
</html>

Output

Step 2: Implement Join Screen

Set the token in the config.js file; it can be generated from the VideoSDK dashboard.

config.js
// Auth token we will use to generate a meeting and connect to it
TOKEN = "Your_Token_Here";
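
The dashboard token is fine for this quickstart. In production, you would typically generate the token on your own server from your VideoSDK API key and secret; here is a minimal Node.js sketch, assuming the jsonwebtoken package and the payload fields described in VideoSDK's authentication docs:

// server-side token generation (sketch; not part of the quickstart files)
const jwt = require("jsonwebtoken");

const token = jwt.sign(
  {
    apikey: "YOUR_API_KEY", // from the VideoSDK dashboard
    permissions: ["allow_join"], // e.g. "allow_join", "allow_mod", "ask_join"
  },
  "YOUR_API_SECRET",
  { algorithm: "HS256", expiresIn: "24h" }
);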

Now get all the elements from the DOM, declare the following variables in the index.js file, and then add event listeners to the join and create meeting buttons.

The join screen acts as a medium to either create a new meeting or join an existing one as a host or a viewer.

It will have three buttons:

1. Join as Host: When this button is clicked, the user joins the meeting with the entered meetingId as a HOST.

2. Join as Viewer: When this button is clicked, the user joins the meeting with the entered meetingId as a VIEWER.

3. New Meeting: When this button is clicked, the user joins a new meeting as a HOST.

index.js
// getting elements from the DOM
const joinHostButton = document.getElementById("joinHostBtn");
const joinViewerButton = document.getElementById("joinViewerBtn");
const leaveButton = document.getElementById("leaveBtn");
const startHlsButton = document.getElementById("startHlsBtn");
const stopHlsButton = document.getElementById("stopHlsBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");
const hlsStatusHeading = document.getElementById("hlsStatusHeading");

// declaring variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

const Constants = VideoSDK.Constants;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting As Host Button Event Listener
joinHostButton.addEventListener("click", async () => {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting(Constants.modes.CONFERENCE);
});

// Join Meeting As Viewer Button Event Listener
joinViewerButton.addEventListener("click", async () => {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting(Constants.modes.VIEWER);
});

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () => {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // calling the VideoSDK Create Room API to get a new roomId
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) => response.json())
    .catch((error) => alert(`Error creating meeting: ${error}`));
  meetingId = roomId;

  initializeMeeting(Constants.modes.CONFERENCE);
});

Output

Step 3: Initialize Meeting

Initialize the meeting with the mode passed to the function, and create a local participant stream only when the mode is "CONFERENCE".

index.js
// Initialize meeting
function initializeMeeting(mode) {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    mode: mode,
  });

  meeting.join();

  meeting.on("meeting-joined", () => {
    textDiv.textContent = null;

    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;

    if (meeting.hlsState === Constants.hlsEvents.HLS_STOPPED) {
      hlsStatusHeading.textContent = "HLS has not started yet";
    } else {
      hlsStatusHeading.textContent = `HLS Status: ${meeting.hlsState}`;
    }

    if (mode === Constants.modes.CONFERENCE) {
      // pin the local participant when joining in `CONFERENCE` mode
      meeting.localParticipant.pin();

      document.getElementById("speakerView").style.display = "block";
    }
  });

  meeting.on("meeting-left", () => {
    videoContainer.innerHTML = "";
  });

  meeting.on("hls-state-changed", (data) => {
    // implemented in Step 6 (ViewerView)
  });

  if (mode === Constants.modes.CONFERENCE) {
    // creating local participant
    createLocalParticipant();

    // setting local participant stream
    meeting.localParticipant.on("stream-enabled", (stream) => {
      setTrack(stream, null, meeting.localParticipant, true);
    });

    // participant joined
    meeting.on("participant-joined", (participant) => {
      if (participant.mode === Constants.modes.CONFERENCE) {
        participant.pin();

        let videoElement = createVideoElement(
          participant.id,
          participant.displayName
        );
        let audioElement = createAudioElement(participant.id);

        participant.on("stream-enabled", (stream) => {
          setTrack(stream, audioElement, participant, false);
        });

        videoContainer.appendChild(videoElement);
        videoContainer.appendChild(audioElement);
      }
    });

    // participant left
    meeting.on("participant-left", (participant) => {
      let vElement = document.getElementById(`f-${participant.id}`);
      vElement.remove();

      let aElement = document.getElementById(`a-${participant.id}`);
      aElement.remove();
    });
  }
}
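
As a side note, initMeeting also accepts optional flags such as micEnabled and webcamEnabled if you want to control the initial device state explicitly; a short sketch based on the SDK's documented setup options:

// optional: explicit initial mic/webcam state when initializing the meeting
meeting = window.VideoSDK.initMeeting({
  meetingId: meetingId, // required
  name: "Thomas Edison", // required
  mode: mode,
  micEnabled: true, // join with mic on
  webcamEnabled: true, // join with webcam on
});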

Output

Step 4: Speaker Controls

The next step is to create the SpeakerView and Controls components to manage features such as join, leave, mute, and unmute.

We will get all the participants from the meeting object and filter them by the CONFERENCE mode, so that only speakers are shown on the screen.
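
For reference, a minimal sketch of that filtering (not part of the quickstart files), assuming the participants Map exposed by the meeting object:

// collect only the participants joined in CONFERENCE mode (the speakers)
function getSpeakers() {
  return [...meeting.participants.values()].filter(
    (participant) => participant.mode === Constants.modes.CONFERENCE
  );
}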

index.js
// Leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () => {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () => {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Webcam Button Event Listener
toggleWebCamButton.addEventListener("click", async () => {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});

// Start HLS Button Event Listener
startHlsButton.addEventListener("click", async () => {
  meeting?.startHls({
    layout: {
      type: "SPOTLIGHT",
      priority: "PIN",
      gridSize: "20",
    },
    theme: "LIGHT",
    mode: "video-and-audio",
    quality: "high",
    orientation: "landscape",
  });
});

// Stop HLS Button Event Listener
stopHlsButton.addEventListener("click", async () => {
  meeting?.stopHls();
});

Step 5: Speaker Media Elements

In this step, we will create functions that help us create the audio and video elements for displaying local and remote participants in the speaker view, and set the appropriate media track depending on whether it is video or audio.

index.js
// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);

  // create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);

    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) => console.error("videoElm.play() failed", error));
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);

      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) => console.error("audioElement.play() failed", error));
    }
  }
}

Output

Step 6: Implement ViewerView

When the host starts live streaming, viewers will be able to watch it.

To implement the player view, we will use hls.js, which helps play an HLS stream in the browser. We have already added the hls.js script to the index.html file.

Now, on the hls-state-changed event, when the participant's mode is set to VIEWER and the HLS status is HLS_PLAYABLE, we will pass the downstreamUrl to hls.js and play it.

index.js
// Initialize meeting
function initializeMeeting(mode) {
  // ...

  // hls-state-changed event
  meeting.on("hls-state-changed", (data) => {
    const { status } = data;

    hlsStatusHeading.textContent = `HLS Status: ${status}`;

    if (mode === Constants.modes.VIEWER) {
      if (status === Constants.hlsEvents.HLS_PLAYABLE) {
        const { downstreamUrl } = data;
        let video = document.createElement("video");
        video.setAttribute("width", "100%");
        video.setAttribute("muted", "false");
        // enable autoplay for the browser autoplay policy
        video.setAttribute("autoplay", "true");

        if (Hls.isSupported()) {
          const hls = new Hls({
            maxLoadingDelay: 1, // max video loading delay used in automatic start level selection
            defaultAudioCodec: "mp4a.40.2", // default audio codec
            maxBufferLength: 0, // if buffer length becomes less than this value, a new fragment will be loaded
            maxMaxBufferLength: 1, // hls.js will never exceed this buffer length
            startLevel: 0, // start playback at the lowest quality level
            startPosition: -1, // set to -1 so playback starts from initial time = 0
            maxBufferHole: 0.001, // maximum inter-fragment buffer hole tolerance when searching for the next fragment to load
            highBufferWatchdogPeriod: 0, // if currentTime has not moved for longer than this while more than maxBufferHole seconds are buffered upfront, hls.js will jump buffer gaps or nudge the playhead to recover playback
            nudgeOffset: 0.05, // if playback continues to stall after the first nudge, currentTime is nudged by nudgeOffset on each retry
            nudgeMaxRetry: 1, // max number of nudge retries before hls.js raises a fatal BUFFER_STALLED_ERROR
            maxFragLookUpTolerance: 0.1, // tolerance factor used during fragment lookup
            liveSyncDurationCount: 1, // playback starts from fragment N - liveSyncDurationCount, N being the last fragment of the live playlist
            abrEwmaFastLive: 1, // fast bitrate exponential moving average half-life, used to compute average bitrate for live streams
            abrEwmaSlowLive: 3, // slow bitrate exponential moving average half-life, used to compute average bitrate for live streams
            abrEwmaFastVoD: 1, // fast bitrate exponential moving average half-life, used to compute average bitrate for VoD streams
            abrEwmaSlowVoD: 3, // slow bitrate exponential moving average half-life, used to compute average bitrate for VoD streams
            maxStarvationDelay: 1, // the ABR algorithm will always try to choose a quality level that avoids rebuffering
          });
          hls.loadSource(downstreamUrl);
          hls.attachMedia(video);
          hls.on(Hls.Events.MANIFEST_PARSED, function () {
            video.play();
          });
        } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
          // fallback to native HLS support (e.g. Safari)
          video.src = downstreamUrl;
          video.addEventListener("canplay", function () {
            video.play();
          });
        }

        videoContainer.appendChild(video);
      }

      if (status === Constants.hlsEvents.HLS_STOPPING) {
        videoContainer.innerHTML = "";
      }
    }
  });
}

Output

Final Output

We are done with the implementation of a customized interactive live streaming app in JavaScript using Video SDK. To explore more features, go through the Basic and Advanced features.

tip

You can check out the complete quickstart example here.

Got a Question? Ask us on Discord.