LLMFallback#

class council.llm.LLMFallback(llm: LLMBase, fallback: LLMBase, retry_before_fallback: int = 2)[source]#

Bases: LLMBase[LLMFallbackConfiguration]

Combines a primary language model with a fallback model: requests are retried against the primary a configurable number of times before being sent to the fallback.

_llm#

The primary language model instance.

Type:

LLMBase

_fallback#

The fallback language model instance.

Type:

LLMBase

_retry_before_fallback#

The number of retry attempts with the primary language model before switching to the fallback.

Type:

int
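The attributes above fully determine the fallback behavior. A simplified, self-contained sketch of the retry-before-fallback logic is below; the function name `post_with_fallback` and the plain-callable models are stand-ins for illustration, not council's actual implementation.

```python
from typing import Callable, Optional


def post_with_fallback(
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
    prompt: str,
    retry_before_fallback: int = 2,
) -> str:
    """Try the primary model up to `retry_before_fallback` times, then fall back."""
    last_error: Optional[Exception] = None
    for _ in range(retry_before_fallback):
        try:
            return primary(prompt)
        except Exception as e:  # a real implementation would filter retryable errors
            last_error = e
    # All primary attempts failed: switch to the fallback model.
    return fallback(prompt)
```

With the default `retry_before_fallback=2`, a consistently failing primary is called twice before the fallback handles the request.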

post_chat_request(context: LLMContext, messages: Sequence[LLMMessage], **kwargs: Any) → LLMResult#

Sends a chat request to the primary language model, retrying and falling back as configured.

Parameters:
  • context (LLMContext) – A context to track execution metrics.

  • messages (Sequence[LLMMessage]) – A list of LLMMessage objects representing the chat messages.

  • **kwargs – Additional keyword arguments for the chat request.

Returns:

The response from the language model.

Return type:

LLMResult

Raises:
  • LLMTokenLimitException – If messages exceed the maximum number of tokens.

  • Exception – If an error occurs during the execution of the chat request.
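Of the exceptions listed, a token-limit error is not transient, so retrying or falling back will not help a caller. A hedged sketch of caller-side handling follows; `LLMTokenLimitException` is redefined locally only to keep the example self-contained (in practice it comes from council), and `safe_post` and the truncation strategy are illustrative assumptions.

```python
class LLMTokenLimitException(Exception):
    """Local stand-in for council's token-limit exception."""


def safe_post(post, messages):
    """Call `post(messages)`; on a token-limit error, retry with only the last message."""
    try:
        return post(messages)
    except LLMTokenLimitException:
        # Token-limit errors are deterministic: shrink the input instead of retrying.
        return post(messages[-1:])
```

Any other exception raised during the request propagates to the caller unchanged.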

render_as_dict(include_children: bool = True) → Dict[str, Any]#

Returns the graph of operations as a dictionary.

render_as_json() → str#

Returns the graph of operations as a JSON string.