Version: 2.0.x

Background Blur - iOS

Want to hide distractions behind you during video calls and maintain a professional or clutter-free appearance? Background blur with VideoSDK can help!

This guide explains how to achieve background blur in your iOS video calling app using the VideoSDKVideoProcessor protocol.

How Background Blur Works

Background blur relies on a process called person segmentation. This involves intelligently separating you (the foreground) from the background in the video frame. Once the segmentation is done, the background is blurred while your image remains clear.
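
To make the segmentation step concrete, here is a minimal sketch of generating a person mask with Apple's Vision framework (the same API the processor below uses on iOS 15.0+). The personMask(for:) helper name is illustrative, and pixelBuffer stands in for any camera frame you already have:

import Vision
import CoreVideo

// Returns a grayscale mask in which the person is white and the background is black.
@available(iOS 15.0, *)
func personMask(for pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced // favor real-time speed over edge accuracy
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])

    // The first observation carries the segmentation mask as a pixel buffer.
    return request.results?.first?.pixelBuffer
}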

Implementation Steps

Here's a breakdown of how to implement background blur in your app:

1. Process Video Frames with onFrameReceived

Within the onFrameReceived function provided by the VideoSDKVideoProcessor protocol, you'll perform the core background blur processing for each video frame:

  • Convert the Frame: Transform the raw video frame from VideoSDK into a format suitable for processing (CVPixelBuffer).

  • Segment the Person (iOS 15.0+): On devices running iOS 15.0 or later, use the Vision framework's person segmentation API (VNGeneratePersonSegmentationRequest) to intelligently separate you (the foreground) from the background.

  • Blur the Background: Apply a Gaussian blur to the frame to create the blurred background.

  • Composite the Image: Combine your clear image with the blurred background, using the segmentation mask to decide which pixels come from which layer.

  • Return the Processed Frame: Convert the processed image back into a format that VideoSDK can use (RTCVideoFrame) and return it for transmission.

Code Example:

This code demonstrates a basic implementation of background blur using the VideoSDKVideoProcessor protocol.

import CoreImage
import CoreImage.CIFilterBuiltins
import Foundation
import Vision        // person segmentation (VNGeneratePersonSegmentationRequest)
import VideoSDKRTC   // VideoSDKVideoProcessor
import WebRTC        // RTCVideoFrame, RTCCVPixelBuffer

// Class conforming to the VideoSDKVideoProcessor protocol.
class MyVideoProcessor: VideoSDKVideoProcessor {

    func onFrameReceived(frame: RTCVideoFrame) -> RTCVideoFrame? {

        // 1. Convert the Frame: Transform the raw frame into a CVPixelBuffer.
        guard let buffer = frame.buffer as? RTCCVPixelBuffer else { return frame }
        let pixelBuffer = buffer.pixelBuffer

        // 2. Segment the Person (iOS 15.0+): Separate the foreground (you) from the background.
        if #available(iOS 15.0, *) {
            let request = VNGeneratePersonSegmentationRequest()
            request.qualityLevel = .balanced // Balanced keeps video smooth while segmenting reliably.
            request.outputPixelFormat = kCVPixelFormatType_OneComponent8
            let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])

            do {
                try requestHandler.perform([request])
                guard let result = request.results?.first else {
                    return frame
                }

                let maskPixelBuffer = result.pixelBuffer

                // 3 & 4. Blur the Background and Composite the Image:
                // compositeImage(...) blurs the original frame and blends it with the
                // sharp foreground using the segmentation mask.
                if let compositedPixelBuffer = compositeImage(originalPixelBuffer: pixelBuffer, maskPixelBuffer: maskPixelBuffer) {

                    // 5. Return the Processed Frame: Wrap the result back into an RTCVideoFrame.
                    let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: compositedPixelBuffer)
                    let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: frame.rotation, timeStampNs: frame.timeStampNs)
                    return rtcVideoFrame
                }
            } catch {
                print("Error performing person segmentation request: \(error)")
            }
        }

        // Fall back to the unprocessed frame if segmentation is unavailable or fails.
        return frame
    }

    @available(iOS 13.0, *)
    func compositeImage(originalPixelBuffer: CVPixelBuffer, maskPixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
        // Original camera image.
        let ciImage = CIImage(cvPixelBuffer: originalPixelBuffer)
        // Person segmentation mask.
        let maskCIImage = CIImage(cvPixelBuffer: maskPixelBuffer)

        // Scale the mask to match the dimensions of the original image.
        let maskScaleX = ciImage.extent.width / maskCIImage.extent.width
        let maskScaleY = ciImage.extent.height / maskCIImage.extent.height
        let maskScaled = maskCIImage.transformed(by: CGAffineTransform(scaleX: maskScaleX, y: maskScaleY))

        // Blur the original image; clamp first so the edges don't darken, then crop back
        // to the original extent. Tweak sigma to adjust the blur amount.
        let blurredImage = ciImage.clampedToExtent()
            .applyingGaussianBlur(sigma: 2.0)
            .cropped(to: ciImage.extent)

        // Blend: the mask keeps the person sharp and shows the blurred background elsewhere.
        let blendFilter = CIFilter.blendWithMask()
        blendFilter.inputImage = ciImage
        blendFilter.backgroundImage = blurredImage
        blendFilter.maskImage = maskScaled

        guard let blendedImage = blendFilter.outputImage else {
            return nil
        }

        // Render the blended image into a new pixel buffer.
        let ciContext = CIContext(options: nil)
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        let width = Int(blendedImage.extent.width)
        let height = Int(blendedImage.extent.height)

        var outputPixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &outputPixelBuffer)

        guard status == kCVReturnSuccess, let buffer = outputPixelBuffer else {
            return nil
        }
        ciContext.render(blendedImage, to: buffer)
        return buffer
    }
}
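
If blur processing drops the frame rate on older devices, the segmentation quality level is the main knob to turn. The values below are Vision's own quality levels; switching between them is just a matter of changing one line in onFrameReceived:

// Trade mask accuracy for speed when frames start to drop:
request.qualityLevel = .fast        // cheapest mask, roughest edges
// request.qualityLevel = .balanced // good default for live video (used above)
// request.qualityLevel = .accurate // cleanest edges, usually too slow for real time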

2. Activating Background Blur

Once you've created your MyVideoProcessor class, enable background blur in your video calls with the setVideoProcessor method of the Meeting class. This tells VideoSDK to run every outgoing video frame through your custom processor before it is transmitted.

class MeetingViewController: UIViewController {

    // Reference to the active VideoSDK meeting.
    var meeting: Meeting!

    // Indicates whether the background is currently blurred.
    var isBackgroundBlurred = false

    // Button action that toggles the blur effect for the local participant.
    @IBAction func toggleBackgroundBlur(_ sender: Any) {

        // If the background is already blurred, remove the video processor.
        if isBackgroundBlurred {
            meeting.setVideoProcessor(processor: nil)
            isBackgroundBlurred = false
            return
        }

        // Otherwise, set the video processor to enable background blur.
        let processor = MyVideoProcessor()
        meeting.setVideoProcessor(processor: processor)
        isBackgroundBlurred = true
    }
}

API Reference

The API reference for the method used in this guide is provided below.

  • setVideoProcessor()

Got a Question? Ask us on Discord