
OpenAI STT

The OpenAI STT provider enables your agent to use OpenAI's speech-to-text models (like Whisper) for converting audio input to text.

Installation

Install the OpenAI-enabled VideoSDK Agents package:

pip install "videosdk-plugins-openai"

Importing

from videosdk.plugins.openai import OpenAISTT

Example Usage

from videosdk.plugins.openai import OpenAISTT
from videosdk.agents import CascadingPipeline

# Initialize the OpenAI STT model
stt = OpenAISTT(
    # Omit api_key if OPENAI_API_KEY is set in your .env file
    api_key="your-openai-api-key",
    model="whisper-1",
    language="en",
    prompt="Transcribe this audio with proper punctuation and formatting.",
)

# Add stt to cascading pipeline
pipeline = CascadingPipeline(stt=stt)
Note: When using a .env file for credentials, don't pass them as arguments to model instances or context objects. The SDK automatically reads environment variables, so omit api_key, videosdk_auth, and other credential parameters from your code.
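As a minimal sketch of the environment-based approach (the key value below is an illustrative placeholder, set directly instead of via a .env loader such as python-dotenv):

```python
import os

# A .env loader would normally populate this variable; we set a
# placeholder directly here for illustration.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key")

# With the variable set, instantiate without credential arguments:
#   stt = OpenAISTT(model="whisper-1", language="en")
key_present = "OPENAI_API_KEY" in os.environ
```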

Configuration Options

  • api_key: (str) Your OpenAI API key (required; can also be set via the OPENAI_API_KEY environment variable)
  • model: (str) The OpenAI STT model to use (e.g., "whisper-1", "gpt-4o-mini-transcribe")
  • base_url: (str) Custom base URL for the OpenAI API (optional)
  • prompt: (str) Custom prompt to guide transcription style and format
  • language: (str) Language code for transcription (default: "en")
  • turn_detection: (dict) Configuration for detecting conversation turns
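Putting the options together, a fuller configuration might look like the sketch below. The keyword names mirror the list above, but the values are illustrative placeholders, and turn_detection is left out because its schema isn't documented here.

```python
# Illustrative option set for OpenAISTT (values are placeholders,
# not defaults); api_key is omitted so it is read from the environment.
stt_config = {
    "model": "gpt-4o-mini-transcribe",
    "base_url": "https://api.openai.com/v1",  # optional custom endpoint
    "language": "en",
    "prompt": "Transcribe with proper punctuation and formatting.",
}

# Unpack into the constructor:
#   stt = OpenAISTT(**stt_config)
```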
