Sarvam AI LLM
The Sarvam AI LLM provider enables your agent to use Sarvam AI's language models for text-based conversations and processing.
Installation
Install the Sarvam AI-enabled VideoSDK Agents package:
```bash
pip install "videosdk-plugins-sarvamai"
```
Importing
```python
from videosdk.plugins.sarvamai import SarvamAILLM
```
Note: When using Sarvam AI as the LLM, function tool calls and MCP tools are not supported.
Authentication
The Sarvam AI plugin requires a Sarvam AI API key. Set `SARVAMAI_API_KEY` in your `.env` file.
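A missing key typically only surfaces when the first request fails, so it can help to verify it at startup. A minimal sketch (`require_sarvam_key` is a hypothetical helper for illustration, not part of the SDK):

```python
import os

def require_sarvam_key() -> str:
    """Return the Sarvam AI key from the environment, failing fast if missing.

    Hypothetical helper: the SDK reads SARVAMAI_API_KEY itself; this just
    surfaces a clear error before the agent starts.
    """
    key = os.getenv("SARVAMAI_API_KEY")
    if not key:
        raise RuntimeError("SARVAMAI_API_KEY is not set; add it to your .env file")
    return key
```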
Example Usage
```python
from videosdk.plugins.sarvamai import SarvamAILLM
from videosdk.agents import CascadingPipeline

# Initialize the Sarvam AI LLM
llm = SarvamAILLM(
    model="sarvam-m",
    # When SARVAMAI_API_KEY is set in your .env file, omit the api_key parameter
    api_key="your-sarvam-ai-api-key",
    temperature=0.7,
    tool_choice="auto",
    max_completion_tokens=1000,
)

# Add the LLM to a cascading pipeline
pipeline = CascadingPipeline(llm=llm)
```
When using a `.env` file for credentials, don't pass them as arguments to model instances or context objects. The SDK automatically reads environment variables, so omit `api_key` and other credential parameters from your code.
Configuration Options
- `model`: (str) The Sarvam AI model to use (default: `"sarvam-m"`).
- `api_key`: (str) Your Sarvam AI API key. Can also be set via the `SARVAMAI_API_KEY` environment variable.
- `temperature`: (float) Sampling temperature for response randomness (default: `0.7`).
- `tool_choice`: (ToolChoice) Tool selection mode (default: `"auto"`).
- `max_completion_tokens`: (int) Maximum number of tokens in the completion response (optional).
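The `temperature` option follows the usual convention of dividing the model's logits before sampling: values below 1 make output more deterministic, values above 1 more varied. A standalone illustration of that scaling (plain Python, independent of the SDK):

```python
import math

def softmax(logits, temperature):
    """Softmax over logits scaled by temperature.

    Lower temperature sharpens the distribution (near-greedy sampling);
    higher temperature flattens it (more diverse sampling).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax(logits, 0.2)  # probability mass concentrates on the top token
warm = softmax(logits, 1.5)  # probabilities spread across more tokens
```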
Additional Resources
The following resources provide more information about using Sarvam AI with the VideoSDK Agents SDK.

- Python package: The `videosdk-plugins-sarvamai` package on PyPI.
- GitHub repo: View the source or contribute to the VideoSDK Sarvam AI LLM plugin.
- Sarvam docs: Sarvam's full documentation site.
Got a question? Ask us on Discord.