LLMFunctionResponse#

class council.llm.LLMFunctionResponse(llm_response: LLMResponse, response: T_Response, previous_responses: Sequence[LLMResponse])[source]#

Bases: Generic[T_Response]

A class representing the response from an LLM function.

This class wraps the raw LLM response together with the parsed result and provides access to response metadata such as duration and consumptions.

property response: T_Response#

Get the parsed response.

Returns:

The parsed response.

Return type:

T_Response

property duration: float#

Get the duration of the LLM function response.

Returns:

The total time taken by the LLM function to produce the response, in seconds, including any self-correction retries.

Return type:

float

property consumptions: Sequence[Consumption]#

Get the consumptions associated with the LLM function response.

Returns:

A sequence of consumption objects if available; otherwise, an empty sequence.

Return type:

Sequence[Consumption]
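A minimal sketch of reading these properties, assuming an LLMFunction named llm_fn built as in the code example below and called via execute_with_llm_response (documented below):

fn_response = llm_fn.execute_with_llm_response(
    user_message="How many listings are there?"
)

parsed = fn_response.response  # the parsed T_Response object
print(f"took {fn_response.duration:.2f}s")  # includes any self-correction retries
for consumption in fn_response.consumptions:
    print(consumption)  # e.g. token usage reported by the underlying LLM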

static from_llm_response(llm_response: LLMResponse, llm_response_parser: Callable[[LLMResponse], T_Response], previous_responses: Sequence[LLMResponse]) → LLMFunctionResponse[source]#

Create an instance of LLMFunctionResponse from a raw LLM response and a parser function.

Parameters:
  • llm_response (LLMResponse) – The raw response from the LLM.

  • llm_response_parser (Callable[[LLMResponse], T_Response]) – A function that parses the LLM response into the desired format.

  • previous_responses (Sequence[LLMResponse]) – Prior LLM responses that could not be parsed successfully.

Returns:

A new instance of LLMFunctionResponse containing the parsed response.

Return type:

LLMFunctionResponse

LLMFunction#

class council.llm.LLMFunction(llm: LLMBase | LLMMiddlewareChain, response_parser: Callable[[LLMResponse], T_Response], system_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, max_retries: int = 3)[source]#

Bases: Generic[T_Response]

Represents a function that handles interactions with an LLM, including error handling and retries. It uses middleware to manage the requests and responses.

__init__(llm: LLMBase | LLMMiddlewareChain, response_parser: Callable[[LLMResponse], T_Response], system_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, max_retries: int = 3) → None[source]#

Initializes the LLMFunction with an LLM (or middleware chain), a response parser, optional system and initial messages, and retry settings.
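A minimal construction sketch. Passing a raw LLM works directly; the variant below wraps it in an LLMMiddlewareChain with LLMLoggingMiddleware (check council.llm for the middlewares available in your version):

from council.llm import LLMMiddlewareChain, LLMLoggingMiddleware

chain = LLMMiddlewareChain(llm)  # llm: an LLMBase, e.g. OpenAILLM.from_env()
chain.add_middleware(LLMLoggingMiddleware())  # log each request/response

llm_fn: LLMFunction[SQLResult] = LLMFunction(
    chain,                    # or pass `llm` directly
    SQLResult.from_response,  # parser from the code example below
    system_message=SYSTEM_PROMPT,
    max_retries=2,
)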

execute(user_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, **kwargs: Any) → T_Response[source]#

Executes the LLM request with the provided user message and additional messages, handling errors and retries as configured.

Parameters:
  • user_message (Union[str, LLMMessage], optional) – The primary message from the user or an LLMMessage object.

  • messages (Iterable[LLMMessage], optional) – Additional messages to include in the request.

  • **kwargs – Additional keyword arguments to be passed to the LLMRequest.

Returns:

The response from the LLM after processing by the response parser.

Return type:

T_Response

Raises:

FunctionOutOfRetryError – If all retry attempts fail, this exception is raised with details.
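A sketch of handling retry exhaustion, using the llm_function from the code example below:

from council.llm import FunctionOutOfRetryError

try:
    result = llm_function.execute(user_message="Show the 5 cheapest listings")
except FunctionOutOfRetryError as e:
    # Raised once max_retries attempts have failed; its message aggregates
    # the errors collected across attempts.
    print(f"LLMFunction gave up after retries: {e}")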

execute_with_llm_response(user_message: str | LLMMessage | None = None, messages: Iterable[LLMMessage] | None = None, **kwargs: Any) → LLMFunctionResponse[T_Response][source]#

Executes the LLM request like execute(), handling errors and retries as configured, but returns an LLMFunctionResponse that exposes the raw LLM response and metadata (duration, consumptions) alongside the parsed result.

Parameters:
  • user_message (Union[str, LLMMessage], optional) – The primary message from the user or an LLMMessage object.

  • messages (Iterable[LLMMessage], optional) – Additional messages to include in the request.

  • **kwargs – Additional keyword arguments to be passed to the LLMRequest.

Returns:

An LLMFunctionResponse wrapping the raw LLM response together with the result produced by the response parser.

Return type:

LLMFunctionResponse[T_Response]

Raises:

FunctionOutOfRetryError – If all retry attempts fail, this exception is raised with details.

Code Example#

Here’s how you can use LLMFunction for a sample SQL generation task.

Tip: You can simplify this example with council.llm.llm_response_parser.CodeBlocksResponseParser; a sketch of that variant follows the example below.

from __future__ import annotations

import os

# !pip install council-ai==0.0.24

from council import OpenAILLM
from council.llm import LLMParsingException, LLMResponse
from council.llm.llm_function import LLMFunction
from council.utils.code_parser import CodeParser

SYSTEM_PROMPT = """
You are a SQL expert producing a SQL query to answer the user's question.

# Instructions
- Assess whether the question is reasonable and possible to solve
given the database schema.
- Follow `Response format` for output format
- Always use LIMIT in your SQL query

# Dataset info

The dataset contains information about Airbnb listings.

Table Name: listings

### Columns
For each column, the name and data type are given as follows:
{name}: {data type}
name: TEXT
price: INTEGER

# Response format

Your entire response must be inside the following code blocks.
All code blocks are mandatory.

```solved
True/False, indicating whether the task is solved based on the provided database schema
```

```sql
SQL query answering the question if the task could be solved; leave empty otherwise
```
"""


# Define a response type object with from_response() method
class SQLResult:
    def __init__(self, solved: bool, sql: str) -> None:
        self.solved = solved
        self.sql = sql

    @staticmethod
    def from_response(response: LLMResponse) -> SQLResult:
        response_str = response.value
        solved_block = CodeParser.find_first("solved", response_str)
        if solved_block is None:
            raise LLMParsingException("No `solved` code block found!")

        solved = solved_block.code.strip().lower() == "true"  # strip to tolerate surrounding whitespace
        if not solved:
            return SQLResult(solved=False, sql="")

        sql_block = CodeParser.find_first("sql", response_str)
        if sql_block is None:
            raise LLMParsingException("No `sql` code block found!")

        sql = sql_block.code

        if "limit" not in sql.lower():
            raise LLMParsingException("Generated SQL query should contain a LIMIT clause")

        return SQLResult(solved=True, sql=sql)


os.environ["OPENAI_API_KEY"] = "sk-YOUR-KEY-HERE"
os.environ["OPENAI_LLM_MODEL"] = "gpt-4o-mini-2024-07-18"
llm = OpenAILLM.from_env()

# Create a function based on LLM, response parser and system prompt
llm_function: LLMFunction[SQLResult] = LLMFunction(
    llm, SQLResult.from_response, SYSTEM_PROMPT
)

# Execute a function with user input
response = llm_function.execute(
    user_message="Show me first 5 rows of the dataset ordered by price"
)
print(type(response))
print(response.sql)
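As the tip above suggests, the hand-written parser can be replaced with CodeBlocksResponseParser. The sketch below assumes it maps class fields to same-named code blocks and supports a validator() hook; verify the exact API against your council version:

from council.llm.llm_response_parser import CodeBlocksResponseParser


# Assumed API: fields map to the `solved` and `sql` code blocks in the response
class SQLResultBlocks(CodeBlocksResponseParser):
    solved: bool
    sql: str

    def validator(self) -> None:
        if self.solved and "limit" not in self.sql.lower():
            raise LLMParsingException("Generated SQL query should contain a LIMIT clause")


llm_function_blocks: LLMFunction[SQLResultBlocks] = LLMFunction(
    llm, SQLResultBlocks.from_response, SYSTEM_PROMPT
)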

LLMFunctionError#

Exception raised when an error occurs during the execution of an LLMFunction.

class council.llm.LLMFunctionError(message: str, retryable: bool = False)[source]#

Bases: Exception

Exception raised when an error occurs during the execution of an LLMFunction.

__init__(message: str, retryable: bool = False) → None[source]#

Initialize the LLMFunctionError instance.

with_traceback()#

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

FunctionOutOfRetryError#

Exception raised when the maximum number of function execution retries is reached. Stores all previous exceptions raised during retry attempts.

class council.llm.FunctionOutOfRetryError(retry_count: int, exceptions: Sequence[Exception] | None = None)[source]#

Bases: LLMFunctionError

Exception raised when the maximum number of function execution retries is reached. Stores all previous exceptions raised during retry attempts.

__init__(retry_count: int, exceptions: Sequence[Exception] | None = None) → None[source]#

Initialize the FunctionOutOfRetryError instance.

Parameters:
  • retry_count (int) – The number of retries attempted.

  • exceptions (Sequence[Exception], optional) – Exceptions raised during previous retry attempts.

with_traceback()#

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.