GeminiLLMConfiguration#

class council.llm.GeminiLLMConfiguration(model: str, api_key: str)[source]#

Bases: LLMConfigurationBase

__init__(model: str, api_key: str) → None[source]#

Initialize a new instance

Parameters:
  • api_key (str) – the Gemini API key

  • model (str) – the Gemini model name
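
For example (illustrative only; the model name below is a placeholder, and the property access relies solely on the attributes documented on this page):

    from council.llm import GeminiLLMConfiguration

    config = GeminiLLMConfiguration(model="gemini-1.5-flash", api_key="your-api-key")

    # model and api_key are exposed as read-only Parameter objects (see the properties below)
    print(config.model)    # Parameter[str] wrapping the model name
    print(config.api_key)  # Parameter[str] wrapping the API key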

property api_key: Parameter[str]#

Gemini API Key

property model: Parameter[str]#

Gemini model

property temperature: Parameter[float]#

Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks.
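
As an illustration of the effect (a standalone sketch, not part of this class), temperature rescales the model's logits before sampling; values near 0 concentrate probability on the top token, while values near 1 leave the distribution close to the raw softmax:

    import numpy as np

    def apply_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
        """Turn logits into sampling probabilities using temperature scaling."""
        scaled = logits / max(temperature, 1e-6)   # guard against division by zero at temperature 0
        exps = np.exp(scaled - scaled.max())       # subtract the max for numerical stability
        return exps / exps.sum()

    logits = np.array([2.0, 1.0, 0.5])
    print(apply_temperature(logits, 0.1))  # almost all mass on the top token
    print(apply_temperature(logits, 1.0))  # close to the unscaled softmax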

property top_k: Parameter[int]#

Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses.
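
A minimal sketch of that filtering step (illustrative only, not part of the council API):

    import numpy as np

    def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
        """Keep only the k most probable tokens and renormalize."""
        keep = np.argsort(probs)[-k:]       # indices of the k largest probabilities
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        return filtered / filtered.sum()

    probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
    print(top_k_filter(probs, 2))  # only the two most likely tokens survive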

property top_p: Parameter[float]#

Use nucleus sampling. In nucleus sampling, the cumulative distribution over all options for each subsequent token is computed in decreasing probability order and cut off once it reaches the probability specified by top_p.
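
A minimal sketch of that cutoff (illustrative only, not part of the council API):

    import numpy as np

    def top_p_filter(probs: np.ndarray, top_p: float) -> np.ndarray:
        """Keep the smallest set of tokens, taken in decreasing probability order, whose cumulative probability reaches top_p."""
        order = np.argsort(probs)[::-1]                       # decreasing probability order
        cumulative = np.cumsum(probs[order])
        cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # number of tokens to keep
        keep = order[:cutoff]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        return filtered / filtered.sum()

    probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
    print(top_p_filter(probs, 0.8))  # the two most likely tokens already cover 0.8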