LLMController

```mermaid
classDiagram
    ControllerBase <|-- LLMController
    Monitorable <|-- ControllerBase
```
class council.controllers.LLMController(chains: Sequence[ChainBase], llm: LLMBase, response_threshold: float = 0.0, top_k: int | None = None, parallelism: bool = False)

Bases: ControllerBase

A controller that uses an LLM to decide the execution plan.

__init__(chains: Sequence[ChainBase], llm: LLMBase, response_threshold: float = 0.0, top_k: int | None = None, parallelism: bool = False)

Initializes a new instance of LLMController.

Parameters:
  • chains (Sequence[ChainBase]) – the chains the controller can select from

  • llm (LLMBase) – the instance of LLM to use

  • response_threshold (float) – the minimum score a response must reach to be selected

  • top_k (int) – the maximum number of execution units in the returned plan

  • parallelism (bool) – if true, builds a plan whose execution units run in parallel

property chains: Sequence[ChainBase]

The chains of the controller.

execute(context: AgentContext) → List[ExecutionUnit]

Generates an execution plan for the agent based on the provided context, chains, and budget.

Parameters:

context (AgentContext) – The context for generating the execution plan.

Returns:

A list of execution units representing the execution plan.

Return type:

List[ExecutionUnit]

Raises:

None

render_as_dict(include_children: bool = True) → Dict[str, Any]

Returns the graph of operations as a dictionary.

render_as_json() → str

Returns the graph of operations as a JSON string.
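These two methods can be pictured as a dictionary view of the monitorable object graph plus its JSON serialization. The sketch below is a rough, hypothetical analogue: the `Node` class and its field names are illustrative only, not the library's actual schema:

```python
import json
from typing import Any, Dict, List, Optional

class Node:
    """A hypothetical monitorable node with optional children."""

    def __init__(self, name: str, children: Optional[List["Node"]] = None):
        self.name = name
        self.children = children or []

    def render_as_dict(self, include_children: bool = True) -> Dict[str, Any]:
        # Each node reports its type and name; children are rendered recursively.
        result: Dict[str, Any] = {"type": type(self).__name__, "name": self.name}
        if include_children:
            result["children"] = [c.render_as_dict() for c in self.children]
        return result

    def render_as_json(self) -> str:
        # JSON rendering is simply the dictionary view, serialized.
        return json.dumps(self.render_as_dict())

controller = Node("LLMController", [Node("chainA"), Node("chainB")])
print(controller.render_as_json())
```

Having `render_as_json` delegate to `render_as_dict` keeps the two views consistent: anything visible in one is visible in the other.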