Azure OpenAI LLM
The Azure OpenAI LLM provider lets your agent use Azure OpenAI's language models (such as GPT-4o) for text-based conversations and processing. It also supports vision input, allowing your agent to analyze and respond to images alongside text when the deployed model supports it.
Installation
Install the Azure OpenAI-enabled VideoSDK Agents package:
pip install "videosdk-plugins-openai"
Importing
from videosdk.plugins.openai import OpenAILLM
Authentication
The Azure OpenAI plugin requires an Azure OpenAI API key, an endpoint URL, and an API version.
Set the following in your .env file:
AZURE_OPENAI_API_KEY=your-azure-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
OPENAI_API_VERSION=2024-02-01
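The SDK reads these variables from the environment at runtime. A minimal standard-library sketch that fails fast when any of them is missing (the `check_azure_env` helper is illustrative, not part of the SDK):

```python
import os

REQUIRED_VARS = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "OPENAI_API_VERSION")

def check_azure_env(env=os.environ):
    """Return the Azure OpenAI settings as a dict, raising if any are missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing Azure OpenAI settings: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```

Running a check like this at startup surfaces configuration mistakes before the first model call rather than mid-conversation.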
Example Usage
from videosdk.plugins.openai import OpenAILLM
from videosdk.agents import Pipeline
llm = OpenAILLM.azure(
    azure_deployment="gpt-4o",
    temperature=0.7,
    seed=42,
    parallel_tool_calls=True,
)

pipeline = Pipeline(llm=llm)
note
When using a `.env` file for credentials, don't pass them as arguments to model instances. The SDK automatically reads environment variables, so omit `api_key`, `videosdk_auth`, and other credential parameters from your code.
Configuration Options
Core
- `azure_deployment` — The Azure OpenAI deployment ID (defaults to the `model` value, e.g. `"gpt-4o"`, `"gpt-4o-mini"`).
- `api_key` — Your Azure OpenAI API key. Falls back to the `AZURE_OPENAI_API_KEY` environment variable.
- `azure_endpoint` — Your Azure OpenAI endpoint URL. Falls back to `AZURE_OPENAI_ENDPOINT`.
- `api_version` — Azure OpenAI API version. Falls back to `OPENAI_API_VERSION`.
- `azure_ad_token` — Azure AD bearer token (alternative to `api_key`). Falls back to `AZURE_OPENAI_AD_TOKEN`.
- `organization` — OpenAI organization ID. Falls back to `OPENAI_ORG_ID` (optional).
- `project` — OpenAI project ID. Falls back to `OPENAI_PROJECT_ID` (optional).
- `base_url` — Override the default API base URL (optional).
- `temperature` — Sampling temperature (0.0 – 2.0). Default: `0.7`.
- `tool_choice` — Tool selection mode: `"auto"`, `"required"`, `"none"`, or a dict to force a specific tool. Default: `"auto"`.
- `max_completion_tokens` — Maximum tokens in the completion response (optional).
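The dict form of `tool_choice` follows the OpenAI Chat Completions convention. A sketch forcing the model to always call one specific tool; here `get_weather` is a hypothetical tool name, and the dict shape is an assumption based on the upstream OpenAI API:

```python
from videosdk.plugins.openai import OpenAILLM

llm = OpenAILLM.azure(
    azure_deployment="gpt-4o",
    # Force a single named tool instead of letting the model decide ("auto").
    # "get_weather" must match a tool registered with your agent.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
```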
Generation knobs
- `top_p` — Nucleus sampling probability mass (float, optional).
- `frequency_penalty` — Penalizes repeated tokens by frequency (float, -2.0 – 2.0, optional).
- `presence_penalty` — Penalizes tokens already present in the response (float, -2.0 – 2.0, optional).
- `seed` — Integer seed for deterministic sampling (optional).
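To make `top_p` concrete: nucleus sampling keeps only the smallest set of most-likely tokens whose cumulative probability reaches `top_p`, then samples from that set. A toy illustration in plain Python (not part of the SDK; the real filtering happens server-side):

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append(token)
        total += p
        if total >= top_p:
            break
    return kept
```

A lower `top_p` restricts generation to high-probability tokens (more focused output); `top_p=1.0` leaves the full distribution available.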
Tool calling
- `parallel_tool_calls` — Allow (`True`) or disallow (`False`) multiple tool calls per turn (optional).
Advanced Example
from videosdk.plugins.openai import OpenAILLM
from videosdk.agents import Pipeline
llm = OpenAILLM.azure(
    azure_deployment="gpt-4o",
    temperature=0.7,
    top_p=0.95,
    frequency_penalty=0.1,
    seed=42,
    parallel_tool_calls=True,
    max_completion_tokens=2048,
)

pipeline = Pipeline(llm=llm)
Additional Resources
Got a question? Ask us on Discord.

