OpenAIChatGPTConfiguration

class council.llm.OpenAIChatGPTConfiguration(api_key: str, api_host: str, model: str, timeout: int | None = None)

Bases: ChatGPTConfigurationBase

Configuration for OpenAILLM.


__init__(api_key: str, api_host: str, model: str, timeout: int | None = None) → None

Initialize a new instance of OpenAIChatGPTConfiguration.

Parameters:

- api_key (str): the OpenAI API key
- api_host (str): the OpenAI API host
- model (str): the model version to use
- timeout (int): seconds to wait for a response from OpenAI before timing out
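
A minimal construction sketch based on the signature above. All values are placeholders, and passing the configuration to OpenAILLM is an assumption drawn from the class description rather than from this page; check your council version:

    from council.llm import OpenAIChatGPTConfiguration, OpenAILLM

    # Build a configuration explicitly; all values below are placeholders.
    config = OpenAIChatGPTConfiguration(
        api_key="sk-...",                   # the OpenAI API key
        api_host="https://api.openai.com",  # the OpenAI API host
        model="gpt-3.5-turbo",              # model version to use
        timeout=30,                         # seconds before timing out
    )

    # Assumption: OpenAILLM accepts this configuration at construction time,
    # per the "Configuration for OpenAILLM" description above.
    llm = OpenAILLM(config)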

property api_host: Parameter[str]

OpenAI API Host

property api_key: Parameter[str]

OpenAI API Key

property frequency_penalty: Parameter[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-frequency_penalty

property max_tokens: Parameter[int]

The maximum number of tokens to generate in the completion. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-max_tokens

property model: Parameter[str]

OpenAI model

property n: Parameter[int]

How many completions to generate for each prompt. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-n

property presence_penalty: Parameter[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-presence_penalty

property temperature: Parameter[float]

Sampling temperature for the LLM, between 0.0 and 2.0. Higher values make the output more random; lower values make it more focused and deterministic. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-temperature

property timeout: Parameter[int]

Seconds to wait for a response from the API before timing out.

property top_p: Parameter[float]

Nucleus sampling: the model considers only the tokens comprising the top_p highest probability mass. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-top_p
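
Each property above returns a Parameter wrapper rather than a raw value. The sketch below shows reading and overriding the sampling settings; the Parameter value attribute and set() method are assumptions based on council's utils.Parameter and may differ between versions:

    from council.llm import OpenAIChatGPTConfiguration

    config = OpenAIChatGPTConfiguration(
        api_key="sk-...", api_host="https://api.openai.com", model="gpt-3.5-turbo"
    )

    # Assumption: Parameter exposes .set() and .value; verify against your version.
    config.temperature.set(0.2)  # lower temperature -> more deterministic output
    config.top_p.set(0.9)        # nucleus sampling: keep the top 90% probability mass
    config.max_tokens.set(256)   # cap the completion length

    print(config.model.value)        # "gpt-3.5-turbo"
    print(config.temperature.value)  # 0.2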