
Sarvam AI LLM

The Sarvam AI LLM provider enables your agent to use Sarvam AI's language models for text-based conversations and processing.

Installation

Install the Sarvam AI-enabled VideoSDK Agents package:

pip install "videosdk-plugins-sarvamai"

Importing

from videosdk.plugins.sarvamai import SarvamAILLM
note

When using Sarvam AI as the LLM option, function tool calls and MCP tools are not supported.

Example Usage

from videosdk.plugins.sarvamai import SarvamAILLM
from videosdk.agents import CascadingPipeline

# Initialize the Sarvam AI LLM model
llm = SarvamAILLM(
    model="sarvam-m",
    # Omit api_key when SARVAMAI_API_KEY is set in your .env file
    api_key="your-sarvam-ai-api-key",
    temperature=0.7,
    tool_choice="auto",
    max_completion_tokens=1000,
)

# Add llm to cascading pipeline
pipeline = CascadingPipeline(llm=llm)
note

When using a .env file for credentials, don't pass them as arguments to model instances or context objects. The SDK automatically reads environment variables, so omit api_key and other credential parameters from your code.
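The credential pattern described in the note can be sketched with a small resolver. The helper below is hypothetical (not part of the SDK), but it mirrors how plugins typically resolve a key: an explicit argument wins, otherwise the SARVAMAI_API_KEY environment variable is used.

```python
import os
from typing import Optional


def resolve_api_key(api_key: Optional[str] = None) -> str:
    """Hypothetical helper illustrating credential resolution:
    prefer an explicit argument, fall back to SARVAMAI_API_KEY."""
    key = api_key or os.environ.get("SARVAMAI_API_KEY")
    if not key:
        raise ValueError(
            "No Sarvam AI API key found: pass api_key or set SARVAMAI_API_KEY"
        )
    return key


# With the variable set (e.g. loaded from a .env file), no argument is needed:
os.environ["SARVAMAI_API_KEY"] = "your-sarvam-ai-api-key"
print(resolve_api_key())
```

This is why the note warns against mixing the two: a hard-coded api_key argument would silently override whatever the .env file provides.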

Configuration Options

  • model: (str) The Sarvam AI model to use (default: "sarvam-m").
  • api_key: (str) Your Sarvam AI API key. Can also be set via the SARVAMAI_API_KEY environment variable.
  • temperature: (float) Sampling temperature for response randomness (default: 0.7).
  • tool_choice: (ToolChoice) Tool selection mode (default: "auto").
  • max_completion_tokens: (int) Maximum number of tokens in the completion response (optional).
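Since model, temperature, and tool_choice all have defaults, a minimal setup can rely on the environment variable alone. This sketch assumes SARVAMAI_API_KEY is already set (for example via a .env loader), so no credential parameters appear in code:

```python
from videosdk.plugins.sarvamai import SarvamAILLM
from videosdk.agents import CascadingPipeline

# Assumes SARVAMAI_API_KEY is set in the environment;
# defaults apply: model="sarvam-m", temperature=0.7, tool_choice="auto"
llm = SarvamAILLM()

pipeline = CascadingPipeline(llm=llm)
```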
