# Video SDK

## Docs

- [A2A Implementation Guide](https://docs.videosdk.live/ai_agents/a2a/implementation): Complete implementation guide for building Agent to Agent (A2A) systems with VideoSDK AI Agents. Learn to create customer service and specialist agents that collaborate seamlessly using real-world examples.
- [Agent to Agent (A2A)](https://docs.videosdk.live/ai_agents/a2a/overview): Understanding the core concepts of Agent to Agent (A2A) communication in VideoSDK AI Agents - AgentCard, A2AMessage, agent registration, and discovery mechanisms for building collaborative multi-agent systems.
- [Build a Custom Voice AI Agent in Minutes](https://docs.videosdk.live/ai_agents/agent-runtime/build-agent): Use VideoSDK's low-code builder to design, test, and deploy a personalized voice agent powered by your preferred LLM.
- [Agent Runtime with Flutter](https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-flutter): VideoSDK lets you integrate AI agents with real-time voice interaction using a Flutter frontend and a no-code agent from the dashboard.
- [Agent Runtime with iOS](https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-ios): VideoSDK lets you integrate AI agents with real-time voice interaction using an iOS frontend and a no-code agent from the dashboard.
- [Agent Runtime with React Native](https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-react-native): VideoSDK lets you integrate AI agents with real-time voice interaction using a React Native frontend and a no-code agent from the dashboard.
- [Agent Runtime with JavaScript](https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/web-integrations/with-javascript): VideoSDK lets you integrate AI agents with real-time voice interaction using a JavaScript frontend and a no-code backend.
- [Agent Runtime with React](https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/web-integrations/with-react): VideoSDK lets you integrate AI agents with real-time voice interaction using a React frontend and a no-code backend.
- [SIP](https://docs.videosdk.live/ai_agents/agent-sip): A framework for creating AI-powered voice agents using VideoSDK and various SIP providers.
- [Playground](https://docs.videosdk.live/ai_agents/agents-playground): Test and interact with your VideoSDK AI agents in real-time using Playground mode. Learn how to enable the interactive testing environment for rapid development and debugging of voice AI agents.
- [AI Agent with Flutter - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-flutter): VideoSDK lets you integrate AI agents with real-time voice interaction using a Flutter frontend and a Python agent.
- [AI Agent with iOS - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-ios): VideoSDK lets you integrate AI agents with real-time voice interaction using an iOS Swift frontend and a Python backend.
- [AI Agent with IoT - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-iot): Integrate a real-time AI agent with an ESP32 device using VideoSDK, enabling voice-based interaction through Google Gemini Live API.
- [AI Agent with JavaScript - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-js): VideoSDK lets you integrate AI agents with real-time voice interaction using a JavaScript frontend.
- [AI Agent with React - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-react): VideoSDK lets you integrate AI agents with real-time voice interaction using a React frontend and a Python backend.
- [AI Agent with React Native - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-react-native): VideoSDK lets you integrate AI agents with real-time voice interaction using a React Native frontend and a Python backend.
- [AI Agent with Unity - Quick Start](https://docs.videosdk.live/ai_agents/ai-agent-quick-start-unity): Integrate a real-time AI agent with Unity using VideoSDK, enabling voice-based interaction through Google Gemini Live API.
- [AI Telephony Agent Quick Start](https://docs.videosdk.live/ai_agents/ai-phone-agent-quick-start): A comprehensive guide to creating a fully functional AI telephony agent using VideoSDK Agent SDK. Learn how to run the agent locally, connect it to the global telephone network using SIP, and enable it to handle both inbound and outbound phone calls.
- [AI Voice Agent Quick Start](https://docs.videosdk.live/ai_agents/ai-voice-agent-quick-start): A step-by-step guide to quickly integrate an AI-powered voice agent into your VideoSDK meetings using the AI Agent SDK. Covers prerequisites, installation, custom agent creation, function tools, pipeline setup, and session management.
- [Authentication and Token | Video SDK](https://docs.videosdk.live/ai_agents/authentication-and-tokens): To use the Video SDK and Audio SDK, developers need to implement a token server. This requires effort on both the frontend and the backend.
- [Console Mode for AI Agents](https://docs.videosdk.live/ai_agents/console-mode): Learn how to use VideoSDK AI Agents in console mode for direct terminal-based voice interactions without joining a meeting room.
- [Agent](https://docs.videosdk.live/ai_agents/core-components/agent): Learn about the `Agent` base class in the VideoSDK AI Agent SDK. Understand how to create custom agents, define system prompts, manage state, and register function tools.
- [Agent Session](https://docs.videosdk.live/ai_agents/core-components/agent-session): Discover how the `AgentSession` in VideoSDK's AI Agent SDK orchestrates various components into a unified workflow, managing the agent's interaction lifecycle and context for seamless real-time communication.
- [Avatar](https://docs.videosdk.live/ai_agents/core-components/avatar): Learn how to add virtual avatars to your VideoSDK AI Agents. Understand avatar integration, configuration, and how to create lifelike visual representations for your agents.
- [Background Audio](https://docs.videosdk.live/ai_agents/core-components/background-audio): Learn about Background Audio in the VideoSDK AI Agent SDK. Enable ambient sounds, thinking audio, and background music to enhance conversational experiences.
- [Call Transfer](https://docs.videosdk.live/ai_agents/core-components/call-transfer): Learn how to enable your AI Agent to seamlessly transfer a live SIP call to a different phone number.
- [Cascading Pipeline](https://docs.videosdk.live/ai_agents/core-components/cascading-pipeline): Explore the `Cascading Pipeline` component in the VideoSDK AI Agent SDK. Learn how it manages AI models (like OpenAI and Gemini), configurations, streaming audio, and multi-modal capabilities.
- [Conversation Flow](https://docs.videosdk.live/ai_agents/core-components/conversation-flow): Explore the `Conversation Flow` component in the VideoSDK AI Agent SDK. Learn how it manages turn-taking in your agents.
- [Conversational Graph](https://docs.videosdk.live/ai_agents/core-components/conversational-graph): Learn how to use Conversational Graph to build structured, state-based conversation flows for your AI agents.
- [De-noise](https://docs.videosdk.live/ai_agents/core-components/de-noise): Learn how to enhance voice quality by removing background noise in VideoSDK AI Agents. Implement real-time audio denoising for clearer conversations.
- [DTMF Events](https://docs.videosdk.live/ai_agents/core-components/dtmf-events): Learn how to enable and listen to DTMF (Dual-Tone Multi-Frequency) events in VideoSDK AI Agents.
- [Fallback Adapter](https://docs.videosdk.live/ai_agents/core-components/fallback-adapter): Learn about fallback and recovery for STT, LLM, and TTS providers in VideoSDK AI Agents.
- [Memory](https://docs.videosdk.live/ai_agents/core-components/memory): Enable your VideoSDK AI Agents with long-term memory to create personalized, context-aware conversations. This guide covers integrating memory providers like Mem0, retrieving context, and enhancing user experience.
- [Multi Agent Switching](https://docs.videosdk.live/ai_agents/core-components/multi-agent-switching): Learn how to switch between multiple specialized agents in VideoSDK for context-aware workflows using real-world examples.
- [Overview](https://docs.videosdk.live/ai_agents/core-components/overview): Get an overview of the VideoSDK AI Agent SDK, a framework for building AI agents for real-time conversations. Learn about its core components: Agent, Pipeline, and Agent Session.
- [Preemptive Response](https://docs.videosdk.live/ai_agents/core-components/preemtive-response): Learn how to enable preemptive generation for faster STT responses using the VideoSDK AI Agent SDK.
- [Pub/Sub Messaging](https://docs.videosdk.live/ai_agents/core-components/pubsub-messaging): Learn how to implement real-time, bidirectional communication between your VideoSDK AI Agent and client applications using Pub/Sub messaging. This guide covers sending and receiving messages, handling events, and practical use cases.
- [RAG (Retrieval-Augmented Generation)](https://docs.videosdk.live/ai_agents/core-components/rag): Learn how to implement Retrieval-Augmented Generation (RAG) with VideoSDK AI Agents to enhance your agent's knowledge base with external documents, databases, and real-time information retrieval capabilities.
- [Realtime Pipeline](https://docs.videosdk.live/ai_agents/core-components/realtime-pipeline): Explore the `Realtime Pipeline` component in the VideoSDK AI Agent SDK. Learn how it manages AI models (like OpenAI and Gemini), configurations, streaming audio, and multi-modal capabilities.
- [Recording](https://docs.videosdk.live/ai_agents/core-components/recording): Learn how to enable the recording functionality with VideoSDK AI Agents for agent sessions and user interactions.
- [RoomOptions](https://docs.videosdk.live/ai_agents/core-components/room-options): Learn how to configure RoomOptions for VideoSDK AI Agents to customize meeting connection, agent behavior, and session management.
- [Speech Handle](https://docs.videosdk.live/ai_agents/core-components/speech-handle): Learn about Speech Handle in the VideoSDK AI Agent SDK. Understand how to control agent speech at both session and utterance levels, manage interruptions, and coordinate sequential speech playback.
- [Testing and Evaluation](https://docs.videosdk.live/ai_agents/core-components/testing-and-evaluation): Learn how to test and evaluate your AI agents using the VideoSDK Agent SDK. Measure latency, accuracy, and reasoning capabilities.
- [Turn Detection & Voice Activity Detection (VAD)](https://docs.videosdk.live/ai_agents/core-components/turn-detection-and-vad): Learn about Turn Detection in the VideoSDK AI Agent SDK. Understand Voice Activity Detection (VAD), End-of-Utterance (EOU) detection, and how to implement natural conversation flow in your AI agents.
- [Utterance Handle](https://docs.videosdk.live/ai_agents/core-components/utterence-handle): Learn about UtteranceHandle in the VideoSDK AI Agent SDK. Understand how to manage agent utterances, prevent overlapping speech, and handle user interruptions gracefully.
- [Vision & Multi-modality](https://docs.videosdk.live/ai_agents/core-components/vision-and-multi-modality): Learn how to add vision and multi-modal capabilities to your VideoSDK AI Agents. Understand image processing, live video input, and multi-modal conversation flows.
- [Voice Mail Detection](https://docs.videosdk.live/ai_agents/core-components/voice-mail-detection): Learn how VideoSDK AI agents detect voicemail systems during outbound calls and take actions such as leaving a voicemail message or ending the call.
- [Worker](https://docs.videosdk.live/ai_agents/core-components/worker): The `Worker` class in VideoSDK's AI Agent SDK serves as the central orchestrator that manages job execution, backend registration, and agent lifecycle coordination. It handles task execution through configurable process/thread executors, manages VideoSDK room connections, and coordinates between agents, pipelines, and infrastructure components for seamless real-time AI communication.
- [Deploy Your Agents](https://docs.videosdk.live/ai_agents/deploy-your-agents): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Agent Cloud (Managed)](https://docs.videosdk.live/ai_agents/deployments/agent-cloud): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Agents deployments](https://docs.videosdk.live/ai_agents/deployments/introduction): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Dispatch Agents](https://docs.videosdk.live/ai_agents/deployments/self-hosting/dispatch-agents): Dynamically dispatch AI agents to meetings using the VideoSDK API.
- [AWS EC2 Deployment](https://docs.videosdk.live/ai_agents/deployments/self-hosting/hosting-environments/aws-ec2): Deploy your VideoSDK AI Agent on AWS EC2 with minimal setup.
- [Docker Deployment](https://docs.videosdk.live/ai_agents/deployments/self-hosting/hosting-environments/docker): Deploy your VideoSDK AI Agent using Docker containers.
- [Kubernetes Deployment](https://docs.videosdk.live/ai_agents/deployments/self-hosting/hosting-environments/kubernetes): Deploy your VideoSDK AI Agent on Kubernetes clusters.
- [Monitoring APIs](https://docs.videosdk.live/ai_agents/deployments/self-hosting/monitoring-apis): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Understanding the Worker](https://docs.videosdk.live/ai_agents/deployments/self-hosting/understanding-worker): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Worker Configuration](https://docs.videosdk.live/ai_agents/deployments/self-hosting/worker-configuration): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [Function Tools](https://docs.videosdk.live/ai_agents/function-tools): Learn how to extend your VideoSDK AI Agent's capabilities with function tools. Create custom actions, integrate with external services, and enable your agent to perform tasks beyond conversation using the @function_tool decorator.
- [Human in the Loop](https://docs.videosdk.live/ai_agents/human-in-the-loop): Learn how to implement Human in the Loop (HITL) functionality with VideoSDK AI Agents using Discord integration for human oversight and intervention.
- [Introduction](https://docs.videosdk.live/ai_agents/introduction): An introduction to the VideoSDK AI Agent SDK, a Python framework for integrating AI-powered voice agents into VideoSDK meetings. Understand its high-level architecture and how it bridges AI models with users for real-time interactions.
- [MCP Integration](https://docs.videosdk.live/ai_agents/mcp-integration): Learn how to integrate Model Context Protocol (MCP) servers with VideoSDK AI Agents to extend your agent's capabilities with external services, databases, and APIs using STDIO and HTTP transport methods.
- [Simli Avatar](https://docs.videosdk.live/ai_agents/plugins/avatar/simli): Learn how to use Simli's real-time AI avatars with the VideoSDK AI Agent SDK. This guide covers configuration, API integration, and adding a visual avatar to your agent.
- [RNNoise Denoise](https://docs.videosdk.live/ai_agents/plugins/denoise): Learn how to use RNNoise with the VideoSDK AI Agent SDK. This guide covers how to denoise your audio input.
- [Anthropic LLM](https://docs.videosdk.live/ai_agents/plugins/llm/anthropic): Learn how to use Anthropic's LLM models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [Azure OpenAI LLM](https://docs.videosdk.live/ai_agents/plugins/llm/azure-openai): Learn how to use Azure OpenAI's LLM models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [Cerebras LLM](https://docs.videosdk.live/ai_agents/plugins/llm/cerebras): Learn how to use Cerebras's LLM models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [Google LLM](https://docs.videosdk.live/ai_agents/plugins/llm/google): Learn how to use Google's LLM models (Gemini) with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [OpenAI LLM](https://docs.videosdk.live/ai_agents/plugins/llm/openai): Learn how to use OpenAI's LLM models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [Sarvam AI LLM](https://docs.videosdk.live/ai_agents/plugins/llm/sarvamai): Learn how to use Sarvam AI's LLM models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-based AI capabilities for your conversational agents.
- [Namo Turn Detector](https://docs.videosdk.live/ai_agents/plugins/namo-turn-detector): Learn how to use the NamoTurnDetectorV1 model with the VideoSDK AI Agent SDK. This guide covers model configuration.
- [AWS Nova Sonic](https://docs.videosdk.live/ai_agents/plugins/realtime/aws-nova-sonic): Learn how to use Amazon's Nova Sonic model with the VideoSDK AI Agent SDK. This guide covers model configuration, streaming audio, and integration with your agent pipeline.
- [Azure Voice Live API](https://docs.videosdk.live/ai_agents/plugins/realtime/azure): Learn how to use Azure's Voice Live API with the VideoSDK AI Agent SDK. This guide covers model configuration, real-time speech interactions, and integration with your agent pipeline.
- [Google Gemini (LiveAPI)](https://docs.videosdk.live/ai_agents/plugins/realtime/google): Learn how to use Google's Gemini models with the VideoSDK AI Agent SDK. This guide covers model configuration, streaming audio, and integration with your agent pipeline.
- [OpenAI](https://docs.videosdk.live/ai_agents/plugins/realtime/openai): Learn how to use OpenAI's real-time models with the VideoSDK AI Agent SDK. This guide covers model configuration, streaming audio, and integration with your agent pipeline.
- [Ultravox](https://docs.videosdk.live/ai_agents/plugins/realtime/ultravox): Learn how to use Ultravox's real-time AI models with the VideoSDK AI Agent SDK. This guide covers model configuration, function calling, MCP integration, and connecting to your agent pipeline.
- [xAI (Grok)](https://docs.videosdk.live/ai_agents/plugins/realtime/xai-grok): Learn how to use xAI's Grok models with the VideoSDK AI Agent SDK. This guide covers model configuration, real-time speech interactions, and integration with your agent pipeline.
- [Silero VAD](https://docs.videosdk.live/ai_agents/plugins/silero-vad): Learn how to use Silero's VAD with the VideoSDK AI Agent SDK. This guide covers model configuration and related events.
- [AssemblyAI STT](https://docs.videosdk.live/ai_agents/plugins/stt/assemblyai): Learn how to use AssemblyAI's real-time speech-to-text models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing streaming transcription.
- [Azure STT](https://docs.videosdk.live/ai_agents/plugins/stt/azure): Learn how to use Azure's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Azure's services.
- [Azure OpenAI STT](https://docs.videosdk.live/ai_agents/plugins/stt/azure-openai): Learn how to use Azure OpenAI's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Azure OpenAI's services.
- [Cartesia STT](https://docs.videosdk.live/ai_agents/plugins/stt/cartesia): Learn how to use Cartesia's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Cartesia's services.
- [Deepgram STT](https://docs.videosdk.live/ai_agents/plugins/stt/deepgram): Learn how to use Deepgram's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Deepgram's services.
- [ElevenLabs STT](https://docs.videosdk.live/ai_agents/plugins/stt/eleven-labs): Learn how to use ElevenLabs's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for ElevenLabs's services.
- [Gladia STT](https://docs.videosdk.live/ai_agents/plugins/stt/gladia): Learn how to use Gladia's real-time speech-to-text models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing streaming transcription.
- [Google STT](https://docs.videosdk.live/ai_agents/plugins/stt/google): Learn how to use Google's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Google's services.
- [Navana STT](https://docs.videosdk.live/ai_agents/plugins/stt/navana): Learn how to use Navana's Bodhi STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text, with a focus on Indian languages.
- [Nvidia STT](https://docs.videosdk.live/ai_agents/plugins/stt/nvidia): Learn how to use Nvidia's Riva STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Nvidia's services.
- [OpenAI STT](https://docs.videosdk.live/ai_agents/plugins/stt/openai): Learn how to use OpenAI's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for OpenAI's services.
- [Sarvam AI STT](https://docs.videosdk.live/ai_agents/plugins/stt/sarvamai): Learn how to use Sarvam AI's STT models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing speech-to-text for Sarvam AI's services.
- [AWS Polly TTS](https://docs.videosdk.live/ai_agents/plugins/tts/aws): Learn how to use AWS Polly's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for AWS's services.
- [Azure TTS](https://docs.videosdk.live/ai_agents/plugins/tts/azure): Learn how to use Azure's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Azure's services.
- [Azure OpenAI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/azure-openai): Learn how to use Azure OpenAI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Azure OpenAI's services.
- [Cartesia TTS](https://docs.videosdk.live/ai_agents/plugins/tts/cartesia): Learn how to use Cartesia's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Cartesia's services.
- [Deepgram TTS](https://docs.videosdk.live/ai_agents/plugins/tts/deepgram): Learn how to use Deepgram's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Deepgram's services.
- [ElevenLabs TTS](https://docs.videosdk.live/ai_agents/plugins/tts/eleven-labs): Learn how to use ElevenLabs's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for ElevenLabs's services.
- [Google TTS](https://docs.videosdk.live/ai_agents/plugins/tts/google): Learn how to use Google's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Google's services.
- [Groq TTS](https://docs.videosdk.live/ai_agents/plugins/tts/groq): Learn how to use Groq's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Groq's services.
- [Hume AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/humeai): Learn how to use Hume AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Hume AI's services.
- [Inworld AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/inworldai): Learn how to use Inworld AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Inworld AI's services.
- [LMNT AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/lmnt): Learn how to use LMNT AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for LMNT AI's services.
- [Murf AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/murfai): Learn how to use Murf AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Murf AI's services.
- [Neuphonic TTS](https://docs.videosdk.live/ai_agents/plugins/tts/neuphonic): Learn how to use Neuphonic's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Neuphonic's services.
- [Nvidia TTS](https://docs.videosdk.live/ai_agents/plugins/tts/nvidia): Learn how to use Nvidia's Riva TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Nvidia's services.
- [OpenAI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/openai): Learn how to use OpenAI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for OpenAI's services.
- [Papla Media TTS](https://docs.videosdk.live/ai_agents/plugins/tts/papla): Learn how to use Papla Media's text-to-speech service with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing TTS for Papla Media.
- [Resemble AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/resemble): Learn how to use Resemble AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Resemble AI's services.
- [Rime AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/rime): Learn how to use Rime AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Rime AI's services.
- [Sarvam AI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/sarvamai): Learn how to use Sarvam AI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Sarvam AI's services.
- [SmallestAI TTS](https://docs.videosdk.live/ai_agents/plugins/tts/smallestai): Learn how to use SmallestAI's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for SmallestAI's services.
- [Speechify TTS](https://docs.videosdk.live/ai_agents/plugins/tts/speechify): Learn how to use Speechify's TTS models with the VideoSDK AI Agent SDK. This guide covers model configuration, API integration, and implementing text-to-speech for Speechify's services.
- [Turn Detector](https://docs.videosdk.live/ai_agents/plugins/turn-detector): Learn how to use the TurnDetector model with the VideoSDK AI Agent SDK. This guide covers model configuration.
- [Recording](https://docs.videosdk.live/ai_agents/recording): Learn how to enable the recording functionality with VideoSDK AI Agents for agent sessions and user interactions.
- [Running Agents with Worker](https://docs.videosdk.live/ai_agents/running-multiple-agents): Learn how to run AI agent instances using the Worker system in the VideoSDK AI Agent SDK. Understand WorkerJob and JobContext for robust agent deployment with proper process isolation and lifecycle management.
- [Session Analytics](https://docs.videosdk.live/ai_agents/tracing-observability/session-analytics): Understand how to use Tracing & Observability for the AI Agent SDK on the VideoSDK Dashboard to inspect sessions, transcripts, and end-to-end latency per component.
- [Trace Insights](https://docs.videosdk.live/ai_agents/tracing-observability/traces): The real power of VideoSDK's Tracing and Observability tools lies in the detailed session and trace views. These views provide a granular breakdown of each conversation, allowing you to analyze every turn, inspect component latencies, and understand the agent's decision-making process.
- [Vision](https://docs.videosdk.live/ai_agents/vision): Learn how to use vision context in realtime and cascading pipelines.
- [Wake Up Call](https://docs.videosdk.live/ai_agents/wakeup-call): Learn how to implement Wake Up Call functionality with VideoSDK AI Agents to automatically trigger actions when users are inactive for a specified duration.
- [WhatsApp Agent Quick Start](https://docs.videosdk.live/ai_agents/whatsapp-voice-agent-quick-start): A comprehensive guide to creating a powerful AI voice agent that can answer calls made to your WhatsApp Business number. Learn how to integrate with Meta Business Platform using direct SIP integration and VideoSDK.
- [Custom Tracks](https://docs.videosdk.live/android/api/sdk-reference/custom-tracks): Quickly integrate custom video track features in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Meeting Error Codes](https://docs.videosdk.live/android/api/sdk-reference/error-codes): If you encounter any of the errors listed below, refer to the Developer Experience Guide, which offers recommended solutions based on common error categories.
- [Initializing a Meeting](https://docs.videosdk.live/android/api/sdk-reference/initMeeting): initialize()
- [MediaEffects library](https://docs.videosdk.live/android/api/sdk-reference/mediaeffects-library): The MediaEffects library enhances video applications by providing advanced media effects, including virtual backgrounds.
- [Meeting class for android SDK.](https://docs.videosdk.live/android/api/sdk-reference/meeting-class): RTC Meeting Class provides features to implement a custom meeting layout in your application.
- [Video SDK Meeting Class](https://docs.videosdk.live/android/api/sdk-reference/meeting-class/introduction): Introduction
- [MeetingEventListener Class](https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class): ---
- [Meeting Class Methods](https://docs.videosdk.live/android/api/sdk-reference/meeting-class/methods): join()
- [Meeting Class Properties](https://docs.videosdk.live/android/api/sdk-reference/meeting-class/properties): getmeetingId()
- [MeetingEventListener Class for android SDK.](https://docs.videosdk.live/android/api/sdk-reference/meeting-event-listener-class): The `MeetingEventListener Class` includes a list of events that can be useful for designing a custom user interface.
- [Participant class for android SDK.](https://docs.videosdk.live/android/api/sdk-reference/participant-class): The `Participant Class` includes methods and events for participants and their associated video & audio streams, data channels, and UI customization.
- [Video SDK Participant Class](https://docs.videosdk.live/android/api/sdk-reference/participant-class/introduction): Participant class includes all the properties, methods, and events related to the participants joined in a particular meeting.
- [Participant Class Methods](https://docs.videosdk.live/android/api/sdk-reference/participant-class/methods): enableWebcam()
- [ParticipantEventListener Class](https://docs.videosdk.live/android/api/sdk-reference/participant-class/participant-event-listener-class): Implementation
- [Participant Class Properties](https://docs.videosdk.live/android/api/sdk-reference/participant-class/properties): getId()
- [ParticipantEventListener Class for android SDK.](https://docs.videosdk.live/android/api/sdk-reference/participant-event-listener-class): The `ParticipantEventListener Class` includes a list of events that can be useful for designing a custom user interface.
- [PubSub class for Android SDK](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class): PubSub Class
- [Video SDK PubSub Class](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class/introduction): Introduction
- [PubSub Class Methods](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class/methods): publish()
- [Properties](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class/pubsub-message-class): getId()
- [PubSubMessageListener Class](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class/pubsub-message-listener-class)
- [PubSubPublishOptions Class](https://docs.videosdk.live/android/api/sdk-reference/pubsub-class/pubsub-publish-options-class): Properties
- [PubSubMessage class for Android SDK](https://docs.videosdk.live/android/api/sdk-reference/pubsub-message-class): PubSubMessage Class
- [PubSubPublishOptions class for Android SDK](https://docs.videosdk.live/android/api/sdk-reference/pubsub-publish-options-class): PubSubPublishOptions Class
- [RealtimeStore](https://docs.videosdk.live/android/api/sdk-reference/realtime-store/introduction): The RealtimeStore class allows you to store, update, retrieve, and observe custom key-value data within a meeting in real time.
- [RealtimeStore Methods](https://docs.videosdk.live/android/api/sdk-reference/realtime-store/methods): set()
- [RealtimeStoreCallback Class](https://docs.videosdk.live/android/api/sdk-reference/realtime-store/realtimestore-callback-class): A callback interface used for asynchronous operations in RealtimeStore.
- [RealtimeStoreListener Class](https://docs.videosdk.live/android/api/sdk-reference/realtime-store/realtimestore-listener-class)
- [Installation steps for RTC Android SDK](https://docs.videosdk.live/android/api/sdk-reference/setup): The RTC Android SDK provides a client for almost all Android devices and is light on CPU and memory.
- [Stream class for Android SDK](https://docs.videosdk.live/android/api/sdk-reference/stream-class): The RTC Stream Class handles audio, video, and screen share streams.
- [Video SDK Stream Class](https://docs.videosdk.live/android/api/sdk-reference/stream-class/introduction): The Stream class is responsible for handling audio, video, and screen sharing streams.
- [Stream Class Methods](https://docs.videosdk.live/android/api/sdk-reference/stream-class/methods): resume()
- [Stream Class Properties](https://docs.videosdk.live/android/api/sdk-reference/stream-class/properties): getId()
- [Terminology - Video SDK Documentation](https://docs.videosdk.live/android/api/sdk-reference/terminology): Video SDK lets you integrate native iOS, Android & Web SDKs to add live video & audio conferencing to your applications.
- [Video SDK Class for Android SDK](https://docs.videosdk.live/android/api/sdk-reference/video-sdk-class): The VideoSDK Class is a factory for initializing, configuring, and creating meetings.
- [VideoSDK Class Events](https://docs.videosdk.live/android/api/sdk-reference/videosdk-class/events): onAudioDeviceChanged()
- [VideoSDK Class](https://docs.videosdk.live/android/api/sdk-reference/videosdk-class/introduction): Introduction
- [VideoSDK Class Methods](https://docs.videosdk.live/android/api/sdk-reference/videosdk-class/methods): initialize()
- [VideoSDK Class Properties](https://docs.videosdk.live/android/api/sdk-reference/videosdk-class/properties): getSelectedAudioDevice()
- [App Size Optimization - Android](https://docs.videosdk.live/android/guide/best-practices/app-optimisation): This guide helps developers optimize app size, enhancing performance and efficiency across different devices. By following these best practices, you can reduce load times, minimize storage requirements, and deliver a more seamless experience to users, all while preserving essential functionality.
- [Developer Experience Guidelines - Android](https://docs.videosdk.live/android/guide/best-practices/developer-experience): This section provides best practices for creating a smooth and efficient development process when working with VideoSDK. From handling errors gracefully to managing resources and event subscriptions, these guidelines help developers build more reliable and maintainable applications. Following these practices can simplify troubleshooting, prevent common pitfalls, and improve overall application performance.
- [Handle Large Rooms - Android](https://docs.videosdk.live/android/guide/best-practices/handle-large-rooms): Managing large meetings requires specific strategies to ensure performance, stability, and a seamless user experience. This section provides best practices for optimizing VideoSDK applications to handle high participant volumes effectively. By implementing these recommendations, you can reduce lag, maintain video and audio quality, and provide a smooth experience even in large rooms.
- [User Experience Guidelines - Android](https://docs.videosdk.live/android/guide/best-practices/user-experience): This guide aims to help developers optimize the user experience and functionality of video conferencing applications with VideoSDK. By following these best practices, you can create smoother interactions, minimize common issues, and deliver a more reliable experience for users.
- [Face Match API](https://docs.videosdk.live/android/guide/identity-verification/face-match-api)
- [Face Spoof Detection](https://docs.videosdk.live/android/guide/identity-verification/face-spoof-detection)
- [Number of Face Detection](https://docs.videosdk.live/android/guide/identity-verification/number-of-face)
- [OCR API](https://docs.videosdk.live/android/guide/identity-verification/ocr)
- [Understanding Analytics Dashboard | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/analytics/understanding-analytics-dashboard): Learn how to access and use VideoSDK's Analytics Dashboard to optimize session performance and diagnose issues.
- [Understanding Call Quality | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/analytics/understanding-call-quality): Learn how factors like bandwidth, latency, and device quality impact your app's call quality with Video SDK.
- [Change Mode - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/audience-management/change-mode): In a live stream, audience members usually join in RECV_ONLY mode, meaning they can only view and listen to the hosts. However, if a host invites an audience member to actively participate (e.g., to speak or present), the audience member can switch their mode to SEND_AND_RECV using the changeMode() method.
- [Remove participant from the meeting - Android SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/audience-management/remove-participant): Remove a participant or a peer from the meeting while it is still in progress. This helps with meeting moderation.
- [Audience Polls during Live Stream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/audience-polling): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Authentication and Token | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/authentication-and-token): To use the Video SDK and Audio SDK, developers need to implement a token server. This requires effort on both the front end and the back end.
- [Developer Experience Guidelines - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/best-practices/developer-experience)
- [Handle Large Rooms - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/best-practices/handle-large-rooms)
- [User Experience Guidelines - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/best-practices/user-experience)
- [Chat during Live Stream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/chat): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Cloud Proxy | Secure and Manage Streaming Traffic | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/cloud-proxy): Leverage Video SDK's Cloud Proxy to securely manage and optimize your video streaming traffic. Ideal for enhancing performance and securing data.
- [Concept and Architecture - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/concept-and-architecture): Before diving into the concepts, let's understand VideoSDK. VideoSDK is a software development kit that offers tools and APIs for creating video- and audio-based apps. It typically includes features such as video and audio calls, chat, cloud recording, simulcasting (RTMP), interactive live streaming (HLS), and many more across a wide range of platforms and devices.
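The Authentication and Token entry above notes that a token server is required on the backend. As a rough illustration only (not VideoSDK's official scheme — the `apikey` and `permissions` claim names and the HS256 signing choice are assumptions here), a token endpoint typically mints a short-lived HMAC-signed JWT from the API key and secret:

```python
# Minimal sketch of a token generator, assuming an HS256 JWT with
# hypothetical "apikey" and "permissions" claims. Uses only the stdlib
# so the JWT structure is visible; a real server would likely use a
# JWT library instead.
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def generate_token(api_key: str, secret: str, ttl_seconds: int = 3600) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "apikey": api_key,                  # assumed claim name
        "permissions": ["allow_join"],      # assumed permission set
        "iat": now,
        "exp": now + ttl_seconds,           # short-lived token
    }
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)


token = generate_token("my-api-key", "my-api-secret")
print(len(token.split(".")))  # a JWT has three dot-separated segments → 3
```

The client then passes this token when joining a meeting, so the API secret never leaves the server.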
- [Custom Audio Sources - Android SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/custom-media-sources/custom-audio-sources): Custom Audio Sources
- [Custom ScreenShare Sources - Android SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/custom-media-sources/custom-screenshare-sources): Custom ScreenShare Sources
- [Custom Video Sources - Android SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/custom-media-sources/custom-video-sources): Custom Video Sources
- [Customized Live Stream](https://docs.videosdk.live/android/guide/interactive-live-streaming/custom-template): VideoSDK is a platform that offers a range of video streaming tools and solutions for content creators, publishers, and developers.
- [Manage Audio and Video Devices - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/device-management/change-devices): This feature allows hosts to switch their microphone, speaker, or camera devices during a live stream. Only hosts (in SEND_AND_RECV mode) can change input/output devices, ensuring they maintain control over their audio and video quality, while audience members (in RECV_ONLY mode) continue to receive the broadcast seamlessly.
- [Mute / Unmute Mic | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/device-management/mute-unmute-mic): Quickly integrate mic controls in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [On / Off Camera | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/device-management/on-off-camera): Quickly integrate camera controls in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Screen Share | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/device-management/screenshare): Quickly integrate screen sharing in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Switch Live Stream | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/fast-channel-switching): Overview
- [Geo Fencing | Location based Restrictions | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/geo-fencing): Implement geographic content restrictions with Video SDK's Geo Fencing feature. Essential for compliance and tailored content delivery based on user location.
- [Host vs Audience Modes | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/getting-started-with-ils/host-vs-audience): Understand the differences between Host and Audience modes in Video SDK's Interactive Live Stream (ILS), their capabilities, and how participants transition between them.
- [Overview | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/getting-started-with-ils/overview): Learn how Video SDK's Interactive Live Streaming (ILS) enables ultra-low-latency real-time engagement, supporting multiple hosts and thousands of viewers with interactive tools.
- [Display Attendees Count - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/handling-participants/display-attendees-count): Quickly integrate interactive live streaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Invite Guest on Stage - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/handling-participants/invite-guest-on-stage): Quickly integrate interactive live streaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Manage On-Screen Participants - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/handling-participants/manage-on-screen-participants): Quickly integrate interactive live streaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Manage Roles - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/handling-participants/manage-roles): Quickly integrate interactive live streaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Interactive Livestream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/integrate-hls/overview): Quickly integrate interactive live streaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Setup HLS Player | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/integrate-hls/setup-hls-player): Use VideoSDK for interactive live streaming.
- [Start HLS | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/integrate-hls/start-hls): Use VideoSDK for interactive live streaming.
- [Stop HLS | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/integrate-hls/stop-hls): Use VideoSDK for interactive live streaming.
- [Integrating Android SDK - Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/integrate-sdk): Create customizable real-time video & audio calling applications with the Video SDK Android SDK to add live video & audio conferencing to your applications.
- [Chat messages with PubSub - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/interaction-in-livestream/chat): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Notify Attendees with PubSub - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/interaction-in-livestream/notify-attendees): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Raise Hand with PubSub - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/interaction-in-livestream/raise-hand): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Reactions with PubSub - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/interaction-in-livestream/reactions): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Introduction](https://docs.videosdk.live/android/guide/interactive-live-streaming/introduction): Welcome to Android Interactive Live Streaming (ILS). Build real-time, low-latency experiences where hosts and audiences connect, interact, and engage seamlessly using VideoSDK.
- [Known Issues - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/known-issues)
- [Pin/Unpin participant from the meeting - Android SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/layout-management/pin-unpin): Pin/Unpin a participant or a peer from the meeting while it is still in progress. This helps with meeting moderation.
- [Rendering Host and Audience Views | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/layout-management/rendering-views): Learn how to render host and audience views during a live stream using Video SDK's Android API, manage participant media, and track participant counts in real time.
- [Live Captioning - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/live-captioning): Quickly integrate live captioning in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add caption support to your applications.
- [Reactions during Live Stream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/reactions): Quickly integrate PubSub in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Geo Tag Meeting recordings - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/geo-tagging): Quickly geo-tag meeting recordings in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Recording Overview - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/overview): Quickly integrate meeting recording in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Generate presigned URL - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/presigned-url): Quickly integrate presigned URL support in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Record Live Stream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/record-meeting): Quickly integrate live stream recording in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Participant Recording (Individual) - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/record-participant): Quickly integrate individual participant recording in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Participant Track Recording - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/record-track): Quickly integrate participant track recording in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Configure Recording Storage - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/recording-storage-config): Storage Configuration
- [Record Meeting Video & Audio Call - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/recording/webhook-and-events): Quickly integrate meeting recording in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Relay Media | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/relay-media): Overview
- [Release Notes - Android](https://docs.videosdk.live/android/guide/interactive-live-streaming/release-notes)
- [RTMP Livestream - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/rtmp-livestream): Quickly integrate RTMP livestreaming in JavaScript, React JS, Android, iOS, React Native, and Flutter with Video SDK to add live video & audio conferencing to your applications.
- [Sending Virtual Gifts - Video SDK Docs](https://docs.videosdk.live/android/guide/interactive-live-streaming/sending-virtual-gifts): Learn how to implement a complete virtual gifting system in your live streaming app using PubSub and your backend server for balance verification and event broadcasting.
- [Initialize Live Stream | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/setup-livestream/initialise-stream): Learn how to initialize a live stream using VideoSDK on Android. This guide covers generating tokens, creating stream IDs, and setting up a live stream session with VideoSDK.
- [Join/Leave Live Stream | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/setup-livestream/join-leave-stream): Easily integrate join and leave functionality for interactive live streams using Video SDK's Android SDK, with event handling for participant management.
- [Waiting Screen | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/setup-livestream/waiting-screen): Learn how to implement a waiting screen on Android with Video SDK to ensure audience members only enter a live stream after a host joins, enhancing the meeting experience.
- [Supported Device OS And Architecture](https://docs.videosdk.live/android/guide/interactive-live-streaming/supported-device-os-architecture): Build customizable real-time video & audio calling applications with the Video SDK Android SDK to add live video & audio conferencing to your applications.
- [Virtual Background](https://docs.videosdk.live/android/guide/interactive-live-streaming/virtual-background): Virtual backgrounds enhance your live stream experience by allowing you to replace your physical background with a digital image or apply a blur effect. This is ideal for creators and streamers who want to maintain privacy, reduce distractions, or add a personal touch to their broadcast. You can use a preloaded background image or supply your own via a CDN.
- [WHIP/WHEP | Video SDK](https://docs.videosdk.live/android/guide/interactive-live-streaming/whip-whep-ils)
- [Overview of SIP with VideoSDK - Android](https://docs.videosdk.live/android/guide/sip-connect/overview)