# OpenAI LLM
The OpenAI LLM provider enables your agent to use OpenAI's language models (such as GPT-4o) for text-based conversations and processing. It also supports vision input, allowing your agent to analyze and respond to images alongside text with supported models.
## Installation

Install the OpenAI-enabled VideoSDK Agents package:

```bash
pip install "videosdk-plugins-openai"
```
## Importing

```python
from videosdk.plugins.openai import OpenAILLM
```
## Authentication

The OpenAI plugin requires an OpenAI API key. Set `OPENAI_API_KEY` in your `.env` file.
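A minimal `.env` file might look like this (the key value below is a placeholder for your actual key):

```bash
OPENAI_API_KEY=your-openai-api-key
```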
## Example Usage

```python
from videosdk.plugins.openai import OpenAILLM
from videosdk.agents import Pipeline

llm = OpenAILLM(
    model="gpt-4o",
    temperature=0.7,
    top_p=0.95,
    seed=42,
    parallel_tool_calls=True,
    max_completion_tokens=1024,
)

pipeline = Pipeline(llm=llm)
```
:::note
When using a `.env` file for credentials, don't pass them as arguments to model instances. The SDK automatically reads environment variables, so omit `api_key` and other credential parameters from your code.
:::
## Configuration Options

### Core

- `model` — The OpenAI model to use (e.g. `"gpt-4o"`, `"gpt-4o-mini"`). Default: `"gpt-4o-mini"`.
- `api_key` — Your OpenAI API key. Falls back to the `OPENAI_API_KEY` environment variable.
- `base_url` — Custom base URL for the OpenAI API (optional).
- `temperature` — Sampling temperature (0.0 – 2.0). Default: `0.7`.
- `tool_choice` — Tool selection mode: `"auto"`, `"required"`, `"none"`, or a dict `{"type": "function", "function": {"name": "my_tool"}}` to force a specific tool. Default: `"auto"`.
- `max_completion_tokens` — Maximum tokens in the completion response (optional).
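As a sketch of the two `tool_choice` forms, the helper below (hypothetical, not part of the SDK) builds the dict shape that forces a specific tool; the string modes pass through unchanged:

```python
# Hypothetical helper illustrating the dict form of tool_choice described above.
def force_tool(name: str) -> dict:
    """Build the tool_choice dict that forces the model to call one specific tool."""
    return {"type": "function", "function": {"name": name}}

# String modes: "auto", "required", "none".
# Dict mode pins a single tool by name:
tool_choice = force_tool("lookup_weather")
print(tool_choice)
# → {'type': 'function', 'function': {'name': 'lookup_weather'}}
```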
### Generation knobs

- `top_p` — Nucleus sampling: only the tokens comprising the top `top_p` probability mass are considered (float, optional).
- `frequency_penalty` — Penalises tokens in proportion to how often they have appeared in the response so far; reduces repetition (float, -2.0 – 2.0, optional).
- `presence_penalty` — Penalises tokens that have appeared at all in the response so far; encourages new topics (float, -2.0 – 2.0, optional).
- `seed` — Integer seed for deterministic sampling. The same seed with the same inputs makes a best effort to produce the same output (optional).
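To make the `top_p` knob concrete, here is an illustrative sketch of nucleus sampling in plain Python; the real sampling happens server-side in the OpenAI API, and the token probabilities below are invented for the example:

```python
# Illustrative only: how top_p restricts the candidate token set.
def nucleus(probs: dict[str, float], top_p: float) -> list[str]:
    """Return the smallest set of tokens, from most to least likely,
    whose cumulative probability mass reaches top_p."""
    kept, mass = [], 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept.append(token)
        mass += p
        if mass >= top_p:
            break
    return kept

# With top_p=0.95, the low-probability tail is cut off before sampling.
print(nucleus({"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}, 0.95))
# → ['the', 'a', 'an']
```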
### Organisation and project

- `organization` — Your OpenAI organisation ID. Falls back to the `OPENAI_ORG_ID` environment variable (optional).
- `project` — Your OpenAI project ID. Falls back to the `OPENAI_PROJECT_ID` environment variable (optional).
### Tool calling

- `parallel_tool_calls` — When `True`, allows the model to call multiple tools in a single turn. When `False`, forces one tool call at a time. Default: provider default (optional).
## Advanced Example

```python
from videosdk.plugins.openai import OpenAILLM
from videosdk.agents import Pipeline

llm = OpenAILLM(
    model="gpt-4o",
    temperature=0.7,
    top_p=0.95,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    seed=42,
    parallel_tool_calls=True,
    max_completion_tokens=2048,
)

pipeline = Pipeline(llm=llm)
```
## Additional Resources

- OpenAI documentation

