LLMConfigurationBase#

class council.llm.LLMConfigurationBase[source]#

Bases: ABC

Base configuration class for OpenAI LLM Chat Completion (GPT) models.
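
A minimal sketch of how a concrete configuration's parameters typically map onto an OpenAI chat-completion request body. The to_payload helper and its default values are illustrative assumptions for this page, not part of the council API; council's Parameter objects may expose these values differently.

    def to_payload(temperature: float = 0.7, top_p: float = 1.0, n: int = 1,
                   max_tokens: int = 256, presence_penalty: float = 0.0,
                   frequency_penalty: float = 0.0) -> dict:
        """Collect the sampling parameters described below into a request-body dict."""
        return {
            "temperature": temperature,
            "top_p": top_p,
            "n": n,
            "max_tokens": max_tokens,
            "presence_penalty": presence_penalty,
            "frequency_penalty": frequency_penalty,
        }

    # Example: a focused, longer completion.
    print(to_payload(temperature=0.2, max_tokens=512))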

property frequency_penalty: Parameter[float]#

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim (see the worked penalty sketch under presence_penalty below). See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-frequency_penalty

property max_tokens: Parameter[int]#

The maximum number of tokens to generate in the completion. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-max_tokens

property n: Parameter[int]#

How many completions to generate for each prompt. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-n

property presence_penalty: Parameter[float]#

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-presence_penalty
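
OpenAI documents both penalties as additive adjustments to a token's logit before sampling: the logit is reduced by count * frequency_penalty, plus presence_penalty once the token has appeared at all. A hedged sketch covering both frequency_penalty and presence_penalty above; function and variable names are illustrative.

    def penalized_logit(logit: float, count: int,
                        frequency_penalty: float, presence_penalty: float) -> float:
        # frequency_penalty scales with how often the token has already appeared;
        # presence_penalty is a flat deduction once it has appeared at all.
        return logit - count * frequency_penalty - (1.0 if count > 0 else 0.0) * presence_penalty

    # A token seen three times is penalized more than a token seen once:
    print(penalized_logit(10.0, 3, frequency_penalty=0.5, presence_penalty=0.2))  # 8.3
    print(penalized_logit(10.0, 1, frequency_penalty=0.5, presence_penalty=0.2))  # 9.3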

property temperature: Parameter[float]#

Sampling temperature for the LLM, ranging from 0.0 to 2.0. Higher values make the output more random; lower values make it more focused and deterministic. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-temperature
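
Temperature rescales the logits before the softmax: values below 1.0 sharpen the distribution toward the most likely token, while values above 1.0 flatten it toward uniform. A minimal sketch with made-up logits (a temperature of exactly 0.0 is handled specially by the API, since dividing by zero is undefined):

    import math

    def softmax_with_temperature(logits: list, temperature: float) -> list:
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.5]
    print(softmax_with_temperature(logits, 0.2))  # nearly one-hot
    print(softmax_with_temperature(logits, 2.0))  # close to uniform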

property top_p: Parameter[float]#

An alternative to sampling with temperature, known as nucleus sampling: the model only considers the tokens comprising the top_p probability mass. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-top_p
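
In nucleus (top_p) sampling, the model keeps the smallest set of tokens whose cumulative probability reaches top_p, renormalizes, and samples only from that set. A sketch over a toy distribution; the token names and probabilities are made up:

    def top_p_filter(probs: dict, top_p: float) -> dict:
        """Keep highest-probability tokens until their cumulative mass reaches top_p."""
        kept, mass = {}, 0.0
        for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
            kept[token] = p
            mass += p
            if mass >= top_p:
                break
        total = sum(kept.values())
        return {t: p / total for t, p in kept.items()}

    probs = {"the": 0.5, "a": 0.3, "an": 0.15, "xyz": 0.05}
    print(top_p_filter(probs, 0.9))  # "xyz" is dropped; the rest are renormalized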