LLMFunctionWithPrompt
- class council.llm.LLMFunctionWithPrompt(llm: LLMBase | LLMMiddlewareChain, response_parser: Callable[[LLMResponse], T_Response], prompt_config: LLMPromptConfigObject, max_retries: int = 3, system_prompt_params: Mapping[str, str] | None = None, system_prompt_caching: bool = False)
Bases: LLMFunction[T_Response]

Represents an LLMFunction created with LLMPrompt.
- __init__(llm: LLMBase | LLMMiddlewareChain, response_parser: Callable[[LLMResponse], T_Response], prompt_config: LLMPromptConfigObject, max_retries: int = 3, system_prompt_params: Mapping[str, str] | None = None, system_prompt_caching: bool = False) → None
Initializes the LLMFunctionWithPrompt with the ability to format and cache the system prompt; see the construction sketch after the parameter list.
- Parameters:
system_prompt_params (Optional[Mapping[str, str]]) – parameters used to format the system prompt
system_prompt_caching (bool) – whether to cache the system prompt (default: False). Only Anthropic prompt caching is supported. Note: the entire system prompt must be static.
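The sketch below constructs an LLMFunctionWithPrompt from a prompt configuration file. The import paths, the prompt file name, the from_yaml loader, the {domain} placeholder, and the response.value accessor are assumptions for illustration; any LLMBase or LLMMiddlewareChain instance can be passed as the first argument.

```python
from council.llm import AnthropicLLM, LLMFunctionWithPrompt, LLMResponse
from council.prompt import LLMPromptConfigObject

# Assumed: an Anthropic model configured from environment variables
# (the prompt caching enabled below is Anthropic-only).
llm = AnthropicLLM.from_env()

# Assumed: a prompt YAML file defining system and user prompt templates.
prompt_config = LLMPromptConfigObject.from_yaml("prompts/summarize.yaml")


def parse_response(response: LLMResponse) -> str:
    # Assumed accessor: return the completion text unchanged.
    return response.value


llm_func = LLMFunctionWithPrompt(
    llm,
    parse_response,
    prompt_config,
    # Fills a {domain} placeholder in the system prompt template (assumed placeholder name).
    system_prompt_params={"domain": "finance"},
    # Anthropic-only prompt caching; the entire system prompt must be static.
    system_prompt_caching=True,
)
```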
- execute(user_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, user_prompt_params: Mapping[str, str] | None = None, **kwargs: Any) → T_Response
Execute the LLMFunctionWithPrompt with the ability to format the user prompt.
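A minimal usage sketch, assuming llm_func was built as in the construction example above and the user prompt template contains a {text} placeholder (an assumed name):

```python
summary = llm_func.execute(
    # Fills the assumed {text} placeholder in the user prompt template.
    user_prompt_params={"text": "Quarterly revenue grew 12% year over year ..."},
)
print(summary)  # parsed T_Response, here a plain string
```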
- execute_with_llm_response(user_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, user_prompt_params: Mapping[str, str] | None = None, **kwargs: Any) → LLMFunctionResponse[T_Response]
Execute the LLMFunctionWithPrompt with the ability to format the user prompt, returning an LLMFunctionResponse that wraps the parsed result.
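When the underlying LLM response is needed alongside the parsed result, execute_with_llm_response can be used instead; in the sketch below the response and duration accessors on LLMFunctionResponse are assumptions:

```python
result = llm_func.execute_with_llm_response(
    user_prompt_params={"text": "Quarterly revenue grew 12% year over year ..."},
)
print(result.response)  # parsed T_Response (assumed accessor)
print(result.duration)  # time spent on the LLM call (assumed accessor)
```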