GeminiLLM
- class council.llm.GeminiLLM(config: GeminiLLMConfiguration)[source]
Bases: LLMBase[GeminiLLMConfiguration]
- __init__(config: GeminiLLMConfiguration) → None[source]
Initialize a new instance.
- Parameters:
config (GeminiLLMConfiguration) – Configuration for the instance.
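A minimal construction sketch. The keyword names passed to GeminiLLMConfiguration below (model, api_key) are assumptions for illustration; they are not documented on this page, so check the configuration class for the exact signature.

```python
from council.llm import GeminiLLM, GeminiLLMConfiguration

# Hypothetical keyword names: the exact GeminiLLMConfiguration
# parameters are not documented on this page.
config = GeminiLLMConfiguration(model="gemini-1.5-flash", api_key="...")
llm = GeminiLLM(config)
```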
- static from_env() → GeminiLLM[source]
Helper function that creates a new instance, reading the configuration from environment variables.
- Returns:
GeminiLLM
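For example, assuming the API key and model are exposed through environment variables (the variable names GEMINI_API_KEY and GEMINI_LLM_MODEL below are an assumption, not stated on this page):

```python
import os

from council.llm import GeminiLLM

# Assumed variable names; check the configuration class for the exact ones.
os.environ["GEMINI_API_KEY"] = "..."
os.environ["GEMINI_LLM_MODEL"] = "gemini-1.5-flash"

llm = GeminiLLM.from_env()
```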
- post_chat_request(context: LLMContext, messages: Sequence[LLMMessage], **kwargs: Any) → LLMResult
Sends a chat request to the language model.
- Parameters:
context (LLMContext) – A context to track execution metrics.
messages (Sequence[LLMMessage]) – A list of LLMMessage objects representing the chat messages.
**kwargs – Additional keyword arguments for the chat request.
- Returns:
The response from the language model.
- Return type:
LLMResult
- Raises:
LLMTokenLimitException – If messages exceed the maximum number of tokens.
Exception – If an error occurs during the execution of the chat request.
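A usage sketch building on the llm created above. LLMContext.empty(), LLMMessage.user_message() and the first_choice attribute of the result are assumed from the wider council API; they are not defined on this page.

```python
from council.contexts import LLMContext
from council.llm import LLMMessage

# Assumed helpers: an empty context and a user-role message constructor.
context = LLMContext.empty()
messages = [LLMMessage.user_message("What is the capital of France?")]

result = llm.post_chat_request(context, messages)

# first_choice is assumed to return the first completion as a string.
print(result.first_choice)
```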
- render_as_dict(include_children: bool = True) → Dict[str, Any]
Returns the graph of operations as a dictionary.
- render_as_json() → str
Returns the graph of operations as a JSON string.
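Both render methods are useful for inspecting or logging the instance; a sketch building on the llm created above:

```python
import json

# Dump the operation graph; include_children=False keeps only the top level.
as_dict = llm.render_as_dict(include_children=False)
print(json.dumps(as_dict, indent=2))

# Equivalent JSON form produced directly by the instance.
print(llm.render_as_json())
```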