CogOS resolves configuration from multiple sources (highest priority wins):
  1. Environment variables (API_KEY, MODEL, BASE_URL)
  2. YAML config file (configs/cogos.yaml, created by cogos init)
  3. Dataclass defaults
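The precedence rule above can be sketched in a few lines. This is an illustration of the merge order (env vars over YAML over defaults), not CogOS's actual implementation; the key names follow the environment variables listed above.

```python
from collections import ChainMap

# Stand-ins for the dataclass defaults (illustrative values).
DEFAULTS = {"api_key": "", "model": "gpt-4o", "base_url": "https://api.openai.com/v1"}

def resolve(yaml_values: dict, environ: dict) -> dict:
    """Merge config sources; highest priority wins."""
    env_values = {
        key: environ[key.upper()]
        for key in ("api_key", "model", "base_url")
        if key.upper() in environ
    }
    # ChainMap looks up keys left to right, so env beats YAML beats defaults.
    return dict(ChainMap(env_values, yaml_values, DEFAULTS))

# An env var overrides the same key from the YAML file:
resolve({"model": "gpt-4o-mini"}, {"MODEL": "gpt-4.1"})["model"]  # → "gpt-4.1"
```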

Config File

Generated by cogos init:
llm:
  api_key: ""                            # or use API_KEY env var
  model: "gpt-4o"                        # any OpenAI-compatible model
  base_url: "https://api.openai.com/v1"  # endpoint URL
  request_delay: 0.5

agent:
  max_iterations: 5
  temperature: 0.0
  max_tokens: 3000
  disable_thinking: true
  schema_update_rounds: 0               # auto-update every N chat rounds (0 = disabled)

chatbot:
  max_tokens: 4096
  context_rounds: 10                    # recent chat rounds as context (0 = single-turn)

persistence:
  session_dir: "./sessions"
  schemas_dir: "./schemas"              # standalone schema files for cross-session sharing

templates:
  templates_dir: "./templates"           # user-defined template JSON files
  custom_templates_dir: "./templates/custom"  # user custom templates (git-ignored)
  schema_template: ""                    # auto-load on startup (e.g. "general")

server:
  host: "0.0.0.0"
  port: 8000

Config File Resolution

| Priority | Path | Description |
|---|---|---|
| 1 | Explicit path (-c / from_file("...")) | User-specified |
| 2 | configs/cogos.yaml | User config (created by cogos init) |

Key Settings

| Config Key | Description | Default |
|---|---|---|
| chatbot.context_rounds | Number of recent chat rounds included as conversation context. When exceeded, schema memory is auto-injected. Set to 0 for single-turn. | 10 |
| agent.schema_update_rounds | Auto-update the schema from chat context every N rounds. Set to 0 to disable. | 0 |
| persistence.schemas_dir | Directory for standalone schema files shared across sessions. Backups are stored in the backups/ subdirectory. | ./schemas |
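The chatbot.context_rounds setting can be pictured as a sliding window over the chat history. The helper below is a hypothetical illustration of the behavior described above (keep the last N rounds; 0 means single-turn), not CogOS's internal code.

```python
def context_window(rounds: list, context_rounds: int) -> list:
    """Return the recent rounds used as conversation context.

    context_rounds == 0 means single-turn: no prior history is included.
    """
    if context_rounds == 0:
        return []
    return rounds[-context_rounds:]

# With context_rounds = 2, only the two most recent rounds are kept:
context_window(["r1", "r2", "r3"], 2)  # → ["r2", "r3"]
```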

Supported LLM Providers

Any OpenAI-compatible API works:

| Provider | base_url | model example |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-4o |
| OpenRouter | https://openrouter.ai/api/v1 | google/gemini-2.5-flash |
| vLLM | http://localhost:8000/v1 | meta-llama/Llama-3-8b |
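Since environment variables take highest priority, a provider can be switched without editing the YAML file. For example, pointing at OpenRouter (the key value is a placeholder):

```shell
# These override any llm.* values in configs/cogos.yaml.
export BASE_URL="https://openrouter.ai/api/v1"
export MODEL="google/gemini-2.5-flash"
export API_KEY="sk-or-your-key"   # placeholder; use your real OpenRouter key
```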

Loading Config in Python

from cogos import CogOSConfig

config = CogOSConfig.from_file()                      # auto-resolve + env vars
config = CogOSConfig.from_file("configs/cogos.yaml")   # explicit path
config = CogOSConfig.from_env()                        # env vars only
config = CogOSConfig(model="gpt-4o")                   # direct construction