Module agents.llm.llm
Classes
class LLM
```python
class LLM(EventEmitter[Literal["error"]]):
    """
    Base class for LLM implementations.
    """

    def __init__(self) -> None:
        """
        Initialize the LLM base class.
        """
        super().__init__()
        self._label = f"{type(self).__module__}.{type(self).__name__}"

    @property
    def label(self) -> str:
        """
        Get the LLM provider label.

        Returns:
            str: A string identifier for the LLM provider (e.g., "videosdk.plugins.openai.llm.OpenAILLM").
        """
        return self._label

    @abstractmethod
    async def chat(
        self,
        messages: ChatContext,
        tools: list[FunctionTool] | None = None,
        **kwargs: Any
    ) -> AsyncIterator[LLMResponse]:
        """
        Main method to interact with the LLM.

        Args:
            messages (ChatContext): The conversation context containing message history.
            tools (list[FunctionTool] | None, optional): List of available function tools for the LLM to use.
            **kwargs (Any): Additional arguments specific to the LLM provider implementation.

        Returns:
            AsyncIterator[LLMResponse]: An async iterator yielding LLMResponse objects as they're generated.

        Raises:
            NotImplementedError: This method must be implemented by subclasses.
        """
        raise NotImplementedError

    @abstractmethod
    async def cancel_current_generation(self) -> None:
        """
        Cancel the current LLM generation if active.

        Raises:
            NotImplementedError: This method must be implemented by subclasses.
        """
        # override in subclasses
        pass

    async def aclose(self) -> None:
        """
        Cleanup resources.
        """
        logger.info(f"Cleaning up LLM: {self.label}")
        await self.cancel_current_generation()
        try:
            import gc
            gc.collect()
            logger.info(f"LLM garbage collection completed: {self.label}")
        except Exception as e:
            logger.error(f"Error during LLM garbage collection: {e}")
        logger.info(f"LLM cleanup completed: {self.label}")

    async def __aenter__(self) -> LLM:
        """
        Async context manager entry point.
        """
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
        """
        Async context manager exit point.
        """
        await self.aclose()
```

Base class for LLM implementations.
Initialize the LLM base class.
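A provider integrates with the framework by subclassing LLM and implementing chat() and cancel_current_generation(). The sketch below is illustrative only: EchoLLM is hypothetical, and the import path is assumed from this module's name, so it may differ in an installed plugin package.

```python
# Hypothetical minimal provider; not part of this module.
from typing import Any, AsyncIterator

from agents.llm.llm import (  # assumed import path; adjust to your installation
    LLM,
    LLMResponse,
    ChatContext,
    ChatRole,
    FunctionTool,
)


class EchoLLM(LLM):
    """Toy provider that yields one canned assistant message per turn."""

    async def chat(
        self,
        messages: ChatContext,
        tools: list[FunctionTool] | None = None,
        **kwargs: Any,
    ) -> AsyncIterator[LLMResponse]:
        # A real plugin would stream tokens from its provider's API here;
        # implementing chat() as an async generator lets callers iterate
        # over responses as they arrive.
        yield LLMResponse(content="Hello from EchoLLM", role=ChatRole.ASSISTANT)

    async def cancel_current_generation(self) -> None:
        # Nothing is ever in flight for this toy implementation.
        pass
```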
Ancestors
- EventEmitter
- typing.Generic
Instance variables
prop label : str
```python
@property
def label(self) -> str:
    """
    Get the LLM provider label.

    Returns:
        str: A string identifier for the LLM provider (e.g., "videosdk.plugins.openai.llm.OpenAILLM").
    """
    return self._label
```

Get the LLM provider label.
Returns
str: A string identifier for the LLM provider (e.g., "videosdk.plugins.openai.llm.OpenAILLM").
Methods
async def aclose(self) -> None
```python
async def aclose(self) -> None:
    """
    Cleanup resources.
    """
    logger.info(f"Cleaning up LLM: {self.label}")
    await self.cancel_current_generation()
    try:
        import gc
        gc.collect()
        logger.info(f"LLM garbage collection completed: {self.label}")
    except Exception as e:
        logger.error(f"Error during LLM garbage collection: {e}")
    logger.info(f"LLM cleanup completed: {self.label}")
```

Cleanup resources.
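Because __aexit__ calls aclose(), the usual way to guarantee cleanup is an async with block. A minimal sketch, reusing the hypothetical EchoLLM from the subclass example above:

```python
async def run(ctx: ChatContext) -> None:
    # On exit, __aexit__ invokes aclose(), which cancels any active
    # generation and runs a garbage-collection pass, even if the body raises.
    async with EchoLLM() as llm:
        async for response in llm.chat(messages=ctx):
            print(response.content)
```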
async def cancel_current_generation(self) -> None
```python
@abstractmethod
async def cancel_current_generation(self) -> None:
    """
    Cancel the current LLM generation if active.

    Raises:
        NotImplementedError: This method must be implemented by subclasses.
    """
    # override in subclasses
    pass
```

Cancel the current LLM generation if active.
Raises
NotImplementedError: This method must be implemented by subclasses.
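Concrete subclasses typically keep a handle to their in-flight work so cancellation can interrupt a stream mid-generation. One common pattern is to track an asyncio.Task; the mixin below is a hypothetical sketch of that idea, not part of this module or any particular plugin.

```python
import asyncio


class CancellableGenerationMixin:
    """Hypothetical helper showing one way to satisfy cancel_current_generation()."""

    _current_task: asyncio.Task | None = None

    async def cancel_current_generation(self) -> None:
        task = self._current_task
        if task is not None and not task.done():
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass  # expected when the generation is interrupted mid-stream
        self._current_task = None
```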
async def chat(self,
messages: ChatContext,
tools: list[FunctionTool] | None = None,
**kwargs: Any) -> AsyncIterator[LLMResponse]
```python
@abstractmethod
async def chat(
    self,
    messages: ChatContext,
    tools: list[FunctionTool] | None = None,
    **kwargs: Any
) -> AsyncIterator[LLMResponse]:
    """
    Main method to interact with the LLM.

    Args:
        messages (ChatContext): The conversation context containing message history.
        tools (list[FunctionTool] | None, optional): List of available function tools for the LLM to use.
        **kwargs (Any): Additional arguments specific to the LLM provider implementation.

    Returns:
        AsyncIterator[LLMResponse]: An async iterator yielding LLMResponse objects as they're generated.

    Raises:
        NotImplementedError: This method must be implemented by subclasses.
    """
    raise NotImplementedError
```

Main method to interact with the LLM.
Args
messages (ChatContext): The conversation context containing message history.
tools (list[FunctionTool] | None, optional): List of available function tools for the LLM to use.
**kwargs (Any): Additional arguments specific to the LLM provider implementation.
Returns
AsyncIterator[LLMResponse]: An async iterator yielding LLMResponse objects as they're generated.
Raises
NotImplementedError: This method must be implemented by subclasses.
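Callers consume chat() as an async iterator. Whether each LLMResponse.content holds an incremental delta or the accumulated text so far is provider-specific, so the sketch below (assuming chat() is implemented as an async generator, as in the EchoLLM example above) simply keeps the most recent value:

```python
async def run_turn(llm: LLM, ctx: ChatContext) -> str:
    final = ""
    async for response in llm.chat(messages=ctx):
        # Provider plugins decide whether content is incremental or cumulative.
        final = response.content
    return final
```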
class LLMResponse(**data: Any)
```python
class LLMResponse(BaseModel):
    """
    Data model to hold LLM response data.

    Attributes:
        content (str): The text content generated by the LLM.
        role (ChatRole): The role of the response (typically ASSISTANT).
        metadata (Optional[dict[str, Any]]): Additional response metadata from the LLM provider.
    """

    content: str
    role: ChatRole
    metadata: Optional[dict[str, Any]] = None
```

Data model to hold LLM response data.
Attributes
content (str): The text content generated by the LLM.
role (ChatRole): The role of the response (typically ASSISTANT).
metadata (Optional[dict[str, Any]]): Additional response metadata from the LLM provider.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors
- pydantic.main.BaseModel
Class variables
var content : str
var metadata : dict[str, typing.Any] | None
var model_config
var role : ChatRole
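Since LLMResponse is a plain pydantic model, constructing one (for example inside a custom chat() implementation) and serialising it works like any other BaseModel. The field values and metadata keys below are illustrative only:

```python
response = LLMResponse(
    content="The capital of France is Paris.",
    role=ChatRole.ASSISTANT,             # assuming ChatRole exposes ASSISTANT
    metadata={"finish_reason": "stop"},  # optional, provider-specific keys
)

print(response.content)
print(response.model_dump())  # pydantic v2: serialise to a plain dict
```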