LLMBase#
- class council.llm.LLMBase(configuration: T_Configuration, token_counter: LLMMessageTokenCounterBase | None = None, name: str | None = None)[source]#
Bases: Generic[T_Configuration], Monitorable, ABC
Abstract base class representing a language model.
- post_chat_request(context: LLMContext, messages: Sequence[LLMMessage], **kwargs: Any) → LLMResult [source]#
Sends a chat request to the language model.
- Parameters:
context (LLMContext) – a context to track execution metrics
messages (Sequence[LLMMessage]) – a sequence of LLMMessage objects representing the chat messages
**kwargs – additional keyword arguments for the chat request
- Returns:
The response from the language model.
- Return type:
LLMResult
- Raises:
LLMTokenLimitException – If messages exceed the maximum number of tokens.
Exception – If an error occurs during the execution of the chat request.
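A minimal usage sketch of post_chat_request. It assumes a concrete subclass such as OpenAILLM with a from_env() constructor, an LLMContext.empty() helper, and LLMTokenLimitException being importable from council.llm; these names are assumptions based on typical council usage, so adapt them to your installation:

    from council.contexts import LLMContext
    from council.llm import LLMMessage, LLMTokenLimitException, OpenAILLM

    llm = OpenAILLM.from_env()    # any concrete LLMBase subclass (assumption)
    context = LLMContext.empty()  # assumption: empty() convenience helper
    messages = [LLMMessage.user_message("What is the capital of France?")]

    try:
        result = llm.post_chat_request(context, messages)
        print(result.first_choice)
    except LLMTokenLimitException:
        print("messages exceed the model's maximum number of tokens")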
- render_as_dict(include_children: bool = True) → Dict[str, Any] #
Returns the graph of operations as a dictionary.
- render_as_json() → str #
Returns the graph of operations as a JSON string.
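Continuing the sketch above, the monitoring graph can be inspected directly; json is used here only for pretty-printing:

    import json

    graph = llm.render_as_dict()        # nested Dict[str, Any] of monitors
    print(json.dumps(graph, indent=2))  # same content as llm.render_as_json()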
LLMResult#
- class council.llm.LLMResult(choices: Sequence[str], consumptions: Sequence[Consumption] | None = None, raw_response: Dict[str, Any] | None = None)[source]#
Bases: object
Represents a response from the LLM.
- property first_choice: str#
The first choice, i.e. the first LLM response.
- property choices: Sequence[str]#
List of LLM responses.
- property consumptions: Sequence[Consumption]#
List of consumptions associated with the LLM call.
- property raw_response: Dict[str, Any]#
Raw response from the LLM provider API.
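A short sketch of consuming an LLMResult, continuing the example above; the Consumption attribute names (kind, value, unit) are assumptions based on council's Consumption class:

    result = llm.post_chat_request(context, messages)

    print(result.first_choice)          # convenience accessor for choices[0]
    for choice in result.choices:       # every completion returned by the model
        print(choice)
    for c in result.consumptions:       # token / cost accounting entries
        print(c.kind, c.value, c.unit)  # attribute names are an assumption
    print(result.raw_response)          # provider payload as a dictionary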