# ADHDbot

## Quick Start

1. Copy the example environment file and fill in your secrets:

   ```bash
   cp .env.example .env
   # edit .env to insert your real OPENROUTER_API_KEY, DISCORD_BOT_TOKEN, TARGET_USER_ID, etc.
   ```

2. Bring up the stack with `docker compose` (recommended; includes host persistence for logs/notes):

   ```bash
   docker compose up -d --build
   ```

   - `./memory` is bind-mounted into the container (`./memory:/app/memory`), so any saved notes appear in the repo directly.
   - `.env` is loaded automatically and the FastAPI service is exposed on http://localhost:8000.

3. Or build and run manually if you prefer the raw Docker commands:

   ```bash
   docker build -t adhdbot .
   docker run --rm -p 8000:8000 --env-file .env -v "$PWD/memory:/app/memory" adhdbot
   ```

## API usage

Once the container is running, hit the API to trigger a prompt flow (note the `context` string must not contain an unescaped apostrophe, since the JSON body is wrapped in single quotes):

```bash
curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "chelsea",
    "category": "general",
    "promptName": "welcome",
    "context": "Take a note that the user is testing the system you are being called from"
  }'
```

Endpoints:

- `GET /health`: simple liveness check.
- `POST /run`: triggers `Runner.run`; pass `userId`, `category`, `promptName`, and `context` to override the defaults from `.env`.
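The override behavior of `POST /run` can be sketched as follows. This is an illustrative helper, not code from the repo: the function name, signature, and fallback values are hypothetical; only the `PROMPT_*` variable names come from the README.

```python
import os

def resolve_prompt_params(body, env=os.environ):
    """Hypothetical sketch: request-body fields win; otherwise the
    PROMPT_* defaults from .env apply. Fallback literals are illustrative."""
    return {
        "category": body.get("category") or env.get("PROMPT_CATEGORY", "general"),
        "promptName": body.get("promptName") or env.get("PROMPT_NAME", "welcome"),
        "context": body.get("context") or env.get("PROMPT_CONTEXT", ""),
    }

env = {"PROMPT_CATEGORY": "general", "PROMPT_NAME": "welcome"}
print(resolve_prompt_params({"promptName": "checkin"}, env))
# → {'category': 'general', 'promptName': 'checkin', 'context': ''}
```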

Environment variables of interest (see `.env.example`):

- `OPENROUTER_API_KEY`: OpenRouter key used by `AIInteraction`.
- `DISCORD_BOT_TOKEN` / `TARGET_USER_ID` / `DISCORD_WEBHOOK_URL`: Discord plumbing.
- `PROMPT_CATEGORY`, `PROMPT_NAME`, `PROMPT_CONTEXT`: defaults for the `/run` endpoint.
- `LOG_PROMPTS` (default `1`): when truthy, every outgoing prompt is logged to stdout so you can audit the final instructions sent to the LLM.
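For reference, a minimal `.env` might look like the following. All values are placeholders; only the variables listed above are shown, and the secrets must be replaced with your own:

```ini
OPENROUTER_API_KEY=sk-or-...
DISCORD_BOT_TOKEN=...
TARGET_USER_ID=...
DISCORD_WEBHOOK_URL=...
PROMPT_CATEGORY=general
PROMPT_NAME=welcome
PROMPT_CONTEXT=
LOG_PROMPTS=1
```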

## Prompt + tooling customization

- All templates live in `prompts/defaultPrompts.json` (and sibling files). Edit them and restart the service for changes to take effect.
- Shared tooling instructions live in `prompts/tool_instructions.md`. `AIInteraction` injects this file into both the system prompt and the end of every user prompt, so any changes immediately affect how models emit `take_note`, `store_task`, or `schedule_reminder` JSON payloads.
- `PROMPTS.md` documents each category plus examples of the structured JSON outputs that downstream services can parse.

## Memory + notes

- The memory subsystem watches LLM responses for fenced `` ```json `` payloads. When it sees `{"action": "take_note", ...}` it writes to `memory/<user>_memory.json` (persisted on the host via the compose volume).
- Each entry includes the note text, a UTC timestamp, and the raw metadata payload, so other services can build summaries or downstream automations from the same file.
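The ingestion step described above can be pictured with a short sketch. This is not the repo's actual code: the regex, function name, and exact flow are illustrative assumptions; only the trigger (`take_note` in a fenced JSON block) and the entry fields (note text, UTC timestamp, raw payload) come from the description above.

```python
import json
import re
from datetime import datetime, timezone

# Illustrative pattern for fenced ```json blocks in an LLM response.
FENCE_RE = re.compile(r"```json\s*(\{.*?\})\s*```", re.DOTALL)

def extract_notes(response_text):
    """Return memory entries for every take_note payload found in the text
    (hypothetical helper; the real subsystem also persists to disk)."""
    notes = []
    for match in FENCE_RE.finditer(response_text):
        try:
            payload = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # skip malformed blocks
        if payload.get("action") == "take_note":
            notes.append({
                "note": payload.get("note", ""),
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "metadata": payload,  # raw payload kept for downstream services
            })
    return notes

reply = 'Got it!\n```json\n{"action": "take_note", "note": "user is testing"}\n```'
print(extract_notes(reply)[0]["note"])  # → user is testing
```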

## Debugging tips

- Tail the container logs with `docker compose logs -f adhdbot` to see:
  - the final prompt (with the tooling contract) sent to the model;
  - memory ingestion messages like `[memory] Recorded note for <user>: ...`.
- If you swap models, change `openRouterModel` in `AIInteraction.py` (or surface it via an environment variable) and rebuild the container.