OpenAI LLM
The OpenAI LLM provider enables your agent to use OpenAI's language models (like GPT-4o) for text-based conversations and processing.
Installation
Install the OpenAI-enabled VideoSDK Agents package:
pip install "videosdk-plugins-openai"
Importing
from videosdk.plugins.openai import OpenAILLM
Example Usage
from videosdk.plugins.openai import OpenAILLM
from videosdk.agents import CascadingPipeline
# Initialize the OpenAI LLM model
llm = OpenAILLM(
    model="gpt-4o",
    # When OPENAI_API_KEY is set in your .env, omit the api_key parameter
    api_key="your-openai-api-key",
    temperature=0.7,
    tool_choice="auto",
    max_completion_tokens=1000,
)
# Add llm to cascading pipeline
pipeline = CascadingPipeline(llm=llm)
note
When using a .env file for credentials, don't pass them as arguments to model instances or context objects. The SDK automatically reads environment variables, so omit api_key, videosdk_auth, and other credential parameters from your code.
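For example, with the key stored in a .env file, the model can be constructed without any credential arguments. This is a minimal sketch; it assumes OPENAI_API_KEY is already present in the process environment (exported or loaded from your .env) before the agent starts:
# .env
# OPENAI_API_KEY=sk-...
from videosdk.plugins.openai import OpenAILLM
# No api_key argument: the SDK reads OPENAI_API_KEY from the environment
llm = OpenAILLM(model="gpt-4o")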
Configuration Options
- model: The OpenAI model to use (e.g., "gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo")
- api_key: Your OpenAI API key (can also be set via an environment variable)
- base_url: Custom base URL for the OpenAI API (optional)
- temperature: (float) Sampling temperature for response randomness (0.0 to 2.0, default: 0.7)
- tool_choice: Tool selection mode (e.g., "auto", "none", or a specific tool)
- max_completion_tokens: (int) Maximum number of tokens in the completion response
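As an illustration of base_url, the sketch below points the client at an OpenAI-compatible endpoint. The gateway URL, API key, and model name are placeholders, and it assumes the endpoint implements the standard OpenAI Chat Completions API:
from videosdk.plugins.openai import OpenAILLM
# Placeholder gateway URL and key; replace with your own OpenAI-compatible endpoint
llm = OpenAILLM(
    model="gpt-4o-mini",
    base_url="https://example-gateway.example.com/v1",
    api_key="your-gateway-api-key",
)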