
Anthropic LLM

The Anthropic LLM provider enables your agent to use Anthropic's Claude language models for text-based conversations and processing.

Installation

Install the Anthropic-enabled VideoSDK Agents package:

pip install "videosdk-plugins-anthropic"

Importing

from videosdk.plugins.anthropic import AnthropicLLM

Authentication

The Anthropic plugin requires an Anthropic API key.

Set ANTHROPIC_API_KEY in your .env file.
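
For example, a minimal .env entry (the key value shown is a placeholder, not a real credential):

```
# .env
ANTHROPIC_API_KEY=your-anthropic-api-key
```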

Example Usage

from videosdk.plugins.anthropic import AnthropicLLM
from videosdk.agents import CascadingPipeline

# Initialize the Anthropic LLM model
llm = AnthropicLLM(
    model="claude-sonnet-4-20250514",
    temperature=0.7,
    max_tokens=1024,
)

# Add llm to cascading pipeline
pipeline = CascadingPipeline(llm=llm)

Note

When using a .env file for credentials, don't pass them as arguments to model instances or context objects. The SDK reads environment variables automatically, so omit api_key and other credential parameters from your code.

Configuration Options

  • model: (str) The Anthropic model to use (default: "claude-sonnet-4-20250514").
  • api_key: (str) Your Anthropic API key. Can also be set via the ANTHROPIC_API_KEY environment variable.
  • base_url: (str) Optional custom base URL for the Claude API (default: None).
  • temperature: (float) Sampling temperature for response randomness (default: 0.7).
  • tool_choice: (ToolChoice) Tool selection mode ("auto", "required", "none") (default: "auto").
  • max_tokens: (int) Maximum number of tokens in the response (default: 1024).
  • top_p: (float) Nucleus sampling probability (optional).
  • top_k: (int) Top-k sampling parameter (optional).
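
To build intuition for how top_k and top_p (nucleus sampling) narrow the token distribution, here is a minimal, self-contained sketch in plain Python. It is illustrative only and not Anthropic's implementation; the function name and token probabilities are invented for the example.

```python
# Illustrative sketch: how top_k and top_p filters narrow a
# token probability distribution before sampling.

def filter_probs(probs: dict, top_k: int = None, top_p: float = None) -> dict:
    """Keep the top_k most likely tokens, then the smallest set whose
    cumulative probability reaches top_p, and renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    if top_p is not None:
        kept, total = [], 0.0
        for tok, p in ranked:
            kept.append((tok, p))
            total += p
            if total >= top_p:
                break
        ranked = kept
    norm = sum(p for _, p in ranked)
    return {tok: p / norm for tok, p in ranked}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
print(filter_probs(probs, top_k=3))   # drops the least likely token
print(filter_probs(probs, top_p=0.8)) # keeps only the nucleus of tokens
```

Lower top_k or top_p values make output more focused and deterministic; higher values (or leaving them unset) allow more varied completions.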

Additional Resources

The following resources provide more information about using Anthropic with the VideoSDK Agents SDK.

Got a question? Ask us on Discord.