LLMSkill#
- class council.skills.LLMSkill(llm: ~council.llm.llm_base.LLMBase, name: str = 'LLMSkill', system_prompt: str = '', context_messages: ~council.skills.llm_skill.ReturnMessages = <function get_last_messages>)[source]#
Bases:
SkillBase
Skill to interact with an LLM.
- __init__(llm: ~council.llm.llm_base.LLMBase, name: str = 'LLMSkill', system_prompt: str = '', context_messages: ~council.skills.llm_skill.ReturnMessages = <function get_last_messages>) None [source]#
Initialize a new instance of LLMSkill.
- Parameters:
llm (LLMBase) – The instance of the LLM (Language Model) to interact with.
name (str) – Optional name of the skill. Defaults to 'LLMSkill'.
system_prompt (str) – Optional system prompt to provide to the language model.
context_messages (Callable[[SkillContext], List[LLMMessage]]) – Optional callable to retrieve context messages.
- Returns:
None
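The `context_messages` parameter lets a caller control which part of the chat history is forwarded to the LLM on each execution. Below is a minimal, self-contained sketch of such a callable, using a plain list of strings as a stand-in for the real `SkillContext`/`LLMMessage` types (the function name and simplified types are illustrative only; the actual signature is `Callable[[SkillContext], List[LLMMessage]]`):

```python
from typing import List


def last_messages(history: List[str], count: int = 3) -> List[str]:
    """Return the most recent `count` messages from a chat history.

    Illustrative stand-in for a `context_messages` callable;
    the real implementation would read from a SkillContext and
    return LLMMessage instances.
    """
    return history[-count:] if count > 0 else []


history = ["hello", "how are you?", "fine, thanks", "what's next?"]
print(last_messages(history, 2))  # forwards only the last two messages
```

A custom callable like this can be passed as `context_messages` when constructing the skill, in place of the default `get_last_messages`.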
- build_success_message(message: str, data: Any | None = None) ChatMessage #
Builds a success message for the skill with the provided message and optional data.
- Parameters:
message (str) – The success message.
data (Any, optional) – Additional data to include in the message. Defaults to None.
- Returns:
The success message.
- Return type:
ChatMessage
- execute(context: SkillContext) ChatMessage [source]#
Execute the LLMSkill against the given context.
- execute_skill(context: SkillContext) ChatMessage #
Execute the skill and return the resulting ChatMessage.
- property name#
Property getter for the skill name.
- Returns:
The name of the skill.
- Return type:
str
- render_as_dict(include_children: bool = True) Dict[str, Any] #
Returns the graph of operations as a dictionary.
- render_as_json() str #
Returns the graph of operations as a JSON string.
- run_in_current_thread(context: ChainContext, iteration_context: Option[IterationContext]) None #
Run the skill in the current thread.
- run_skill(context: ChainContext, executor: ThreadPoolExecutor) None #
Run the skill in a separate thread and wait for completion.
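`run_skill` hands the work to an executor and blocks until it finishes. A minimal sketch of that submit-and-wait pattern using Python's standard `concurrent.futures` module (the `execute_skill` body and the `dict` context below are illustrative placeholders, not council's internals):

```python
from concurrent.futures import ThreadPoolExecutor


def execute_skill(context: dict) -> str:
    # Placeholder for the skill's actual work; returns a result message.
    return f"processed {context['input']}"


def run_skill(context: dict, executor: ThreadPoolExecutor) -> str:
    # Submit the skill to the executor so it runs on a worker thread,
    # then wait for completion, mirroring the
    # "run in a separate thread and wait" contract.
    future = executor.submit(execute_skill, context)
    return future.result()  # blocks until the skill finishes


with ThreadPoolExecutor(max_workers=2) as executor:
    print(run_skill({"input": "hello"}, executor))
```

`Future.result()` re-raises any exception thrown on the worker thread, so failures inside the skill surface in the calling thread rather than being silently lost.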