AnthropicLLMConfiguration#

class council.llm.AnthropicLLMConfiguration(model: str, api_key: str, max_tokens: int)[source]#

Bases: LLMConfigurationBase

Configuration for AnthropicLLM

__init__(model: str, api_key: str, max_tokens: int) → None[source]#

Initialize a new instance

Parameters:

        model (str) – the Anthropic model name

        api_key (str) – the Anthropic API key

        max_tokens (int) – the maximum number of tokens to generate
property api_key: Parameter[str]#

Anthropic API Key

property max_tokens: Parameter[int]#

The maximum number of tokens to generate before stopping. Note that models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

property model: Parameter[str]#

Anthropic model

model_name() → str[source]#

Anthropic model name

property temperature: Parameter[float]#

Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple choice tasks, and closer to 1 for creative and generative tasks.

property timeout: Parameter[int]#

API timeout

property top_k: Parameter[int]#

Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses.

property top_p: Parameter[float]#

Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p.
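The properties above can be read as a configuration object whose fields are exposed as typed, read-only Parameter values. As a minimal stdlib-only sketch (not council's actual implementation; the Parameter dataclass and the class name here are simplified stand-ins), the interface could look like:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


@dataclass(frozen=True)
class Parameter(Generic[T]):
    """Simplified stand-in for council's Parameter wrapper."""
    name: str
    value: T


class AnthropicLLMConfigurationSketch:
    """Illustrative sketch mirroring the documented interface."""

    def __init__(self, model: str, api_key: str, max_tokens: int) -> None:
        self._model = Parameter("model", model)
        self._api_key = Parameter("api_key", api_key)
        self._max_tokens = Parameter("max_tokens", max_tokens)

    @property
    def model(self) -> Parameter[str]:
        # Anthropic model
        return self._model

    @property
    def api_key(self) -> Parameter[str]:
        # Anthropic API key
        return self._api_key

    @property
    def max_tokens(self) -> Parameter[int]:
        # absolute maximum number of tokens to generate
        return self._max_tokens

    def model_name(self) -> str:
        # Anthropic model name
        return self._model.value


config = AnthropicLLMConfigurationSketch(
    model="claude-2",      # example model name
    api_key="sk-ant-...",  # placeholder key
    max_tokens=1024,
)
print(config.model_name())
```

Exposing each setting as a Parameter rather than a bare value lets the library attach a name (and, in the real class, validation and defaults for temperature, top_k, top_p, and timeout) to every configuration field.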