Display Audio and Video
In this guide we will take a look at how to render the participant's audio and video on the screen.
Rendering Participant
This process involves three steps.
1. Get Mic and Webcam Status
Before rendering a participant, we must determine whether their audio and video are on or off. If the webcam is off, we will render a placeholder frame showing the participant's name; otherwise, we will render the video.
Step 1:
Let's get all the participants from the useMeeting hook and create a simple box displaying each participant's name.
import { useMeeting, useParticipant } from "@videosdk.live/react-sdk";

const MeetingView = () => {
  //Getting all the participants
  const { participants } = useMeeting();

  //Looping over the participants and rendering a simple grid
  return (
    <div style={{ display: "grid", gridTemplateColumns: "repeat(3, 1fr)" }}>
      {[...participants.keys()].map((participantId) => (
        <ParticipantView key={participantId} participantId={participantId} />
      ))}
    </div>
  );
};

// This will render a single participant's view
const ParticipantView = ({ participantId }) => {
  const { displayName } = useParticipant(participantId);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>{displayName}</p>
    </div>
  );
};
Step 2:
To display the status of each participant's microphone and webcam in the grid, you can use the micOn and webcamOn properties of the useParticipant hook.
Here's a code snippet that renders the mic and webcam status:
const ParticipantView = ({ participantId }) => {
  //Getting the micOn and webcamOn property
  const { displayName, micOn, webcamOn } = useParticipant(participantId);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>{displayName}</p>
      <p>
        Webcam: {webcamOn ? "On" : "Off"} Mic: {micOn ? "On" : "Off"}
      </p>
    </div>
  );
};
2. Rendering Video
The webcam and mic status is now displayed. If the webcam is turned on, we will need the participant's webcamStream, obtained from the useParticipant hook, in order to display the participant's video.
Step 1:
Let's get the webcamStream and define a <video> tag which will render the participant's video. We will use the useRef hook to create a reference to this video tag.
import { useRef } from "react";

const ParticipantView = ({ participantId }) => {
  //Getting the webcamStream property
  const { displayName, micOn, webcamOn, webcamStream } =
    useParticipant(participantId);

  const webcamRef = useRef(null);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>...</p>
      <video width={"100%"} height={"100%"} ref={webcamRef} autoPlay />
    </div>
  );
};
Step 2:
Now that we have our <video> element in place, we will add a useEffect so that, as soon as the webcamStream is available, it is attached to the <video> element.
import { useEffect, useRef } from "react";

const ParticipantView = ({ participantId }) => {
  //Getting the webcamStream property
  const { displayName, micOn, webcamOn, webcamStream } =
    useParticipant(participantId);

  const webcamRef = useRef(null);

  useEffect(() => {
    if (webcamRef.current) {
      if (webcamOn && webcamStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(webcamStream.track);
        webcamRef.current.srcObject = mediaStream;
        webcamRef.current
          .play()
          .catch((error) =>
            console.error("webcamRef.current.play() failed", error)
          );
      } else {
        webcamRef.current.srcObject = null;
      }
    }
  }, [webcamStream, webcamOn]);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      ...
    </div>
  );
};
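As mentioned at the start, when the webcam is off we fall back to the simple name frame instead of the video. A minimal sketch of that conditional rendering, assuming the webcamOn, displayName, and webcamRef values already obtained in the snippet above:

return (
  <div
    style={{
      height: "300px",
      background: "#C0C2C9",
    }}
  >
    {webcamOn ? (
      // Webcam is on: render the live video element
      <video width={"100%"} height={"100%"} ref={webcamRef} autoPlay />
    ) : (
      // Webcam is off: render a placeholder frame with the participant's name
      <p>{displayName}</p>
    )}
  </div>
);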
2.1 Maintaining the Aspect Ratio
If you wish to maintain the aspect ratio of the video (for example, showing a vertical video without stretching it to fill the entire view), you can set object-fit: contain. If you wish to always fill the view irrespective of the video resolution, you can set object-fit: cover.
const ParticipantView = ({ participantId }) => {
  //... Other video configurations

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      ...
      <video
        width={"100%"}
        height={"100%"}
        ref={webcamRef}
        autoPlay
        style={{ objectFit: "contain" }}
      />
    </div>
  );
};
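Similarly, a minimal sketch of the cover behaviour described above, applying objectFit to the same <video> element:

// Fills the view completely, cropping the video if its aspect ratio differs
<video
  width={"100%"}
  height={"100%"}
  ref={webcamRef}
  autoPlay
  style={{ objectFit: "cover" }}
/>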
2.2 Mirror Local Video View
If you wish to show a mirrored view of the local participant, you can apply a transform style to the participant's <video> element.
const ParticipantView = ({ participantId }) => {
  const { isLocal } = useParticipant(participantId);

  //... Other video configurations

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      ...
      <video
        width={"100%"}
        height={"100%"}
        ref={webcamRef}
        autoPlay
        style={
          isLocal
            ? {
                objectFit: "contain",
                transform: "scaleX(-1)",
                WebkitTransform: "scaleX(-1)",
              }
            : { objectFit: "contain" }
        }
      />
    </div>
  );
};
Sample of the mirrored local video view
3. Rendering Audio
We have now displayed the webcam and mic status along with the participant's video. If the mic is turned on, we will need the participant's micStream, obtained from the useParticipant hook, in order to play the participant's audio.
Step 1:
Let's get the micStream and define an <audio> tag which will render the participant's audio. We will use the useRef hook to create a reference to this audio tag.
import { useRef } from "react";

const ParticipantView = ({ participantId }) => {
  //Getting the micStream property
  const { displayName, micOn, webcamOn, webcamStream, micStream } =
    useParticipant(participantId);

  const audioRef = useRef(null);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>...</p>
      <audio ref={audioRef} autoPlay />
    </div>
  );
};
Step 2:
Now that we have our <audio> element in place, we will add a useEffect so that, as soon as the micStream is available, it is attached to the <audio> element.
import { useEffect, useRef } from "react";

const ParticipantView = ({ participantId }) => {
  //Getting the micStream property
  const { displayName, micOn, webcamOn, webcamStream, micStream } =
    useParticipant(participantId);

  // ... webcam stream displaying here

  const audioRef = useRef(null);

  useEffect(() => {
    if (audioRef.current) {
      if (micOn && micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);
        audioRef.current.srcObject = mediaStream;
        audioRef.current
          .play()
          .catch((error) =>
            console.error("audioRef.current.play() failed", error)
          );
      } else {
        audioRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      ...
    </div>
  );
};
While rendering the audio, we should not play the local participant's own audio, as it would create an echo. To avoid this, we will mute the <audio> element for the local participant.
const ParticipantView = ({ participantId }) => {
  //Getting the isLocal property
  const { displayName, micOn, webcamOn, webcamStream, micStream, isLocal } =
    useParticipant(participantId);

  const audioRef = useRef(null);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>...</p>
      <audio ref={audioRef} autoPlay muted={isLocal} />
    </div>
  );
};
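For reference, here is a consolidated sketch that combines the snippets above into a single ParticipantView: the status line, the name frame fallback, the mirrored local video, and the muted local audio. The import path assumes the @videosdk.live/react-sdk package; adjust it to whatever your project already uses.

import { useEffect, useRef } from "react";
import { useParticipant } from "@videosdk.live/react-sdk";

const ParticipantView = ({ participantId }) => {
  const { displayName, micOn, webcamOn, webcamStream, micStream, isLocal } =
    useParticipant(participantId);

  const webcamRef = useRef(null);
  const audioRef = useRef(null);

  // Attach the webcam stream to the <video> element whenever it changes
  useEffect(() => {
    if (webcamRef.current) {
      if (webcamOn && webcamStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(webcamStream.track);
        webcamRef.current.srcObject = mediaStream;
        webcamRef.current
          .play()
          .catch((error) =>
            console.error("webcamRef.current.play() failed", error)
          );
      } else {
        webcamRef.current.srcObject = null;
      }
    }
  }, [webcamStream, webcamOn]);

  // Attach the mic stream to the <audio> element whenever it changes
  useEffect(() => {
    if (audioRef.current) {
      if (micOn && micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);
        audioRef.current.srcObject = mediaStream;
        audioRef.current
          .play()
          .catch((error) =>
            console.error("audioRef.current.play() failed", error)
          );
      } else {
        audioRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    <div
      style={{
        height: "300px",
        background: "#C0C2C9",
      }}
    >
      <p>
        {displayName} | Webcam: {webcamOn ? "On" : "Off"} | Mic: {micOn ? "On" : "Off"}
      </p>
      {webcamOn ? (
        <video
          width={"100%"}
          height={"100%"}
          ref={webcamRef}
          autoPlay
          style={
            isLocal
              ? {
                  objectFit: "contain",
                  transform: "scaleX(-1)",
                  WebkitTransform: "scaleX(-1)",
                }
              : { objectFit: "contain" }
          }
        />
      ) : (
        <p>{displayName}</p>
      )}
      {/* The local participant's audio is muted to avoid echo */}
      <audio ref={audioRef} autoPlay muted={isLocal} />
    </div>
  );
};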
Autoplay Audio and Video
autoplay is the attribute passed to <audio> and <video> elements so that their media plays automatically, without the user clicking on the element or hitting a play button. While building an audio-video calling app, it is necessary to make sure that the autoplay flag is set to true, so that any media that is loaded starts playing even if play() was not called. You can learn more about the autoplay flag in the official documentation.
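In the snippets above, this is handled by React's autoPlay prop together with the explicit play() call in the effects as a fallback. A minimal sketch of the idea, reusing webcamRef from earlier:

// The autoPlay prop asks the browser to start playback as soon as a stream
// is attached to the element via srcObject.
<video width={"100%"} height={"100%"} ref={webcamRef} autoPlay />

// If the browser's autoplay policy still blocks playback, the explicit
// play() call rejects, and the error can be surfaced instead of failing silently.
webcamRef.current
  .play()
  .catch((error) => console.error("autoplay was prevented", error));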
API Reference
The API references for all the methods and events utilised in this guide are provided below.