AnthropicLLMConfiguration#
- class council.llm.AnthropicLLMConfiguration(model: str, api_key: str, max_tokens: int)[source]#
Bases: LLMConfigurationBase
Configuration for AnthropicLLM
- __init__(model: str, api_key: str, max_tokens: int) → None [source]#
Initialize a new instance.
- Parameters:
model (str) – either claude-2 or claude-instant-1. See https://docs.anthropic.com/claude/reference/selecting-a-model for details.
api_key (str) – the Anthropic API key
max_tokens (int) – The maximum number of tokens to generate before stopping.
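A minimal usage sketch based on the constructor signature above. The model name and key shown are placeholder values; substitute your own Anthropic API key.

```python
from council.llm import AnthropicLLMConfiguration

# Placeholder credentials and limits; adjust to your deployment.
config = AnthropicLLMConfiguration(
    model="claude-2",       # or "claude-instant-1"
    api_key="sk-ant-...",   # your Anthropic API key
    max_tokens=1000,        # hard cap on generated tokens
)
```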
- property api_key: Parameter[str]#
Anthropic API Key
- property max_tokens: Parameter[int]#
The maximum number of tokens to generate before stopping. Note that models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
- property model: Parameter[str]#
Anthropic model
- property temperature: Parameter[float]#
Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks.
- property timeout: Parameter[int]#
API timeout
- property top_k: Parameter[int]#
Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses.
- property top_p: Parameter[float]#
Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p.
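To make the top_k and top_p descriptions above concrete, here is a toy, self-contained sketch (not council code) of how each setting narrows the candidate set for the next token before sampling. The function name and the example distribution are illustrative only.

```python
def filter_candidates(probs, top_k=None, top_p=None):
    """Return the (token, prob) pairs kept after top-k / top-p filtering.

    probs: dict mapping token -> probability (assumed to sum to ~1).
    """
    # Rank candidates by probability, highest first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        # top_k: keep only the K most probable tokens.
        ranked = ranked[:top_k]
    if top_p is not None:
        # top_p (nucleus sampling): keep the smallest prefix whose
        # cumulative probability reaches top_p, cutting off the long tail.
        kept, cumulative = [], 0.0
        for token, p in ranked:
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept
    return ranked

dist = {"the": 0.5, "a": 0.3, "zebra": 0.15, "qux": 0.05}
print(filter_candidates(dist, top_k=2))    # [('the', 0.5), ('a', 0.3)]
print(filter_candidates(dist, top_p=0.9))  # keeps 'the', 'a', 'zebra'
```

In a real model the surviving probabilities would then be renormalized and one token sampled from them; this sketch stops at the filtering step the two parameters control.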