AzureChatGPTConfiguration#

class council.llm.AzureChatGPTConfiguration(api_key: str, api_base: str, deployment_name: str, model_name: str | None = None)[source]#

Bases: ChatGPTConfigurationBase

Configuration for AzureLLM

Notes

https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#completions

__init__(api_key: str, api_base: str, deployment_name: str, model_name: str | None = None) None[source]#

Initialize a new instance of AzureChatGPTConfiguration.

Parameters:

api_key (str) – the Azure API key

api_base (str) – the base URL of the Azure OpenAI API

deployment_name (str) – the Azure deployment name

model_name (str, optional) – an optional model name
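A minimal construction sketch; the credentials, endpoint, and deployment name below are placeholders, not working values:

    from council.llm import AzureChatGPTConfiguration

    # Placeholder credentials and endpoint for illustration only.
    config = AzureChatGPTConfiguration(
        api_key="YOUR-AZURE-API-KEY",
        api_base="https://my-resource.openai.azure.com",
        deployment_name="gpt-35-turbo",
    )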

property api_base: Parameter[str]#

The base URL of the Azure OpenAI API

property api_key: Parameter[str]#

Azure API Key

property api_version: Parameter[str]#

The API version to use, e.g. 2023-03-15-preview, 2023-05-15, or 2023-06-01-preview
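For example, to pin one of the versions listed above (a sketch assuming Parameter exposes a set() method, as in council.utils; the exact interface may differ):

    config.api_version.set("2023-05-15")  # assumed Parameter.set(); pins the API version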

property deployment_name: Parameter[str]#

Azure deployment name

property frequency_penalty: Parameter[float]#

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-frequency_penalty

property max_tokens: Parameter[int]#

The maximum number of tokens to generate in the completion. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-max_tokens

property n: Parameter[int]#

How many completions to generate for each prompt. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-n

property presence_penalty: Parameter[float]#

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-presence_penalty

property temperature: Parameter[float]#

Temperature setting for the LLM, ranging from 0.0 to 2.0. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-temperature

property timeout: Parameter[int]#

Number of seconds to wait for a response from the Azure API before timing out

property top_p: Parameter[float]#

An alternative to sampling with temperature: the model considers only the tokens comprising the top_p probability mass. See: https://platform.openai.com/docs/api-reference/completions/create#completions-create-top_p
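The sampling and limit parameters above can be adjusted after construction. A hedged sketch, assuming Parameter exposes set() and value as in council.utils (verify against the Parameter reference):

    # Assumed Parameter.set()/.value interface; adjust if it differs in your version.
    config.temperature.set(0.2)  # lower temperature for more deterministic output
    config.top_p.set(0.9)        # sample only from the top 90% probability mass
    config.max_tokens.set(512)   # cap the completion length
    config.timeout.set(30)       # give up if Azure does not respond within 30 seconds

    print(config.temperature.value)  # parameter values are read back via .value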