LLMConfigObject

class council.llm.LLMConfigObject(kind: str, version: str, metadata: DataObjectMetadata, spec: T)

Bases: DataObject[LLMConfigSpec]

Helper class to instantiate an LLM from a YAML file

Code Example

The following code shows how to load an LLM from a YAML file for a specific provider.

from council.llm import OpenAILLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-openai.yaml")
llm = OpenAILLM.from_config(llm_config)

Alternatively, use council.llm.get_llm_from_config to determine the provider class automatically from the config file.

from council.llm import get_llm_from_config

llm = get_llm_from_config("data/configs/llm-config-openai.yaml")
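Once loaded, the model can be invoked like any other council LLM. Below is a minimal sketch, assuming the usual council chat API (LLMContext, LLMMessage, and post_chat_request):

from council.contexts import LLMContext
from council.llm import LLMMessage

# post a single user message and read the first completion
result = llm.post_chat_request(LLMContext.empty(), [LLMMessage.user_message("Say hello!")])
print(result.first_choice)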

OpenAI Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: an-openai-deployed-model
  labels:
    provider: OpenAI
spec:
  description: "Model used to do ABC"
  provider:
    name: OpenAI
    openAISpec:
      # https://platform.openai.com/docs/models/
      model: gpt-4o-mini-2024-07-18
      timeout: 60
      # specify API key directly
      apiKey: sk-my-api-key
      # or use environment variable (recommended)
      # apiKey:
      #   fromEnvVar: OPENAI_API_KEY
  parameters:
    n: 1
    temperature: 0

Anthropic Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: an-anthropic-deployed-model
  labels:
    provider: Anthropic
spec:
  description: "Model used to do RST"
  provider:
    name: Anthropic
    anthropicSpec:
      # https://docs.anthropic.com/en/docs/about-claude/models
      model: claude-3-haiku-20240307
      timeout: 60
      maxTokens: 1024
      apiKey:
        fromEnvVar: ANTHROPIC_API_KEY
  parameters:
    n: 1
    temperature: 0
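Note that the Anthropic API requires maxTokens to be set explicitly. Loading follows the same pattern as above; a sketch assuming AnthropicLLM is exported from council.llm (the file path is illustrative):

from council.llm import AnthropicLLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-anthropic.yaml")
llm = AnthropicLLM.from_config(llm_config)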

Gemini Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: a-gemini-deployed-model
  labels:
    provider: Google
spec:
  description: "Model used to do RST"
  provider:
    name: Gemini
    googleGeminiSpec:
      # https://ai.google.dev/gemini-api/docs/models/gemini
      model: gemini-1.5-flash
      apiKey:
        fromEnvVar: GEMINI_API_KEY
  parameters:
    n: 1
    temperature: 0
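A corresponding loading sketch, assuming a GeminiLLM provider class in council.llm (file path illustrative):

from council.llm import GeminiLLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-gemini.yaml")
llm = GeminiLLM.from_config(llm_config)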

Groq Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: a-groq-deployed-model
  labels:
    provider: Groq
spec:
  description: "Model used to do UVW"
  provider:
    name: Groq
    groqSpec:
      # https://console.groq.com/docs/models
      model: llama-3.2-1b-preview
      apiKey:
        fromEnvVar: GROQ_API_KEY
  parameters:
    maxTokens: 128
    seed: 42
    temperature: 0
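The Groq config loads the same way; a sketch assuming a GroqLLM provider class in council.llm (file path illustrative):

from council.llm import GroqLLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-groq.yaml")
llm = GroqLLM.from_config(llm_config)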

Azure Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: an-azure-deployed-model
  labels:
    provider: Azure
spec:
  description: "Model used to do XYZ"
  provider:
    name: Azure
    azureSpec:
      deploymentName: gpt-35-turbo
      apiVersion: "2023-05-15"
      apiBase:
        fromEnvVar: AZURE_LLM_API_BASE
      timeout: 90
      apiKey:
        fromEnvVar: AZURE_LLM_API_KEY
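This config reads both the endpoint and the key from environment variables, so AZURE_LLM_API_BASE and AZURE_LLM_API_KEY must be set before loading. A sketch with AzureLLM (file path illustrative):

from council.llm import AzureLLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-azure.yaml")
llm = AzureLLM.from_config(llm_config)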

Ollama Config Example

kind: LLMConfig
version: 0.1
metadata:
  name: a-local-deployed-model
  labels:
    provider: Ollama
spec:
  description: "Model used to do ABC"
  provider:
    name: Ollama
    ollamaSpec:
      # https://ollama.com/library
      model: llama3.2
      keep_alive: 300  # seconds to keep model in memory
      json_mode: false
  parameters:
    temperature: 0
    seed: 42
    numCtx: 2048  # context window
    numPredict: 128  # number of tokens to predict
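Ollama serves models locally, so no API key is involved, but the Ollama server must be running and the model pulled beforehand (e.g. ollama pull llama3.2). A loading sketch, assuming an OllamaLLM provider class in council.llm (file path illustrative):

from council.llm import OllamaLLM, LLMConfigObject

llm_config = LLMConfigObject.from_yaml("data/configs/llm-config-ollama.yaml")
llm = OllamaLLM.from_config(llm_config)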

Fallback Config Example

Note that provider and fallbackProvider can be any of the providers shown above.

kind: LLMConfig
version: 0.1
metadata:
  name: a-deployed-model-with-fallback
spec:
  description: "Model used to do ABC"
  provider:
    name: Anthropic
    anthropicSpec:
      model: claude-3-haiku-20240307
      timeout: 60
      maxTokens: 1024
      apiKey:
        fromEnvVar: ANTHROPIC_API_KEY
  fallbackProvider:
    name: OpenAI
    openAISpec:
      model: gpt-4o-mini-2024-07-18
      timeout: 60
      apiKey:
        fromEnvVar: OPENAI_API_KEY
  parameters:
    n: 1
    temperature: 0
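Because this config names two providers, the simplest way to load it is get_llm_from_config, which resolves both provider classes from the spec and should return an LLM that falls back to the secondary provider when the primary one fails (file path illustrative):

from council.llm import get_llm_from_config

llm = get_llm_from_config("data/configs/llm-config-with-fallback.yaml")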