Timmy-time-dashboard/.env.example
Claude bb93697b92 feat: add Telegram bot integration
Bridges Telegram messages to Timmy via python-telegram-bot (an optional
dependency). The bot token can be supplied through the TELEGRAM_TOKEN
env var or at runtime via the new POST /telegram/setup dashboard
endpoint, which (re)starts the bot without restarting the server.
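The precedence between the env var, runtime setup, and the persisted state file can be sketched roughly as follows (function shape is hypothetical; the actual logic lives in src/telegram_bot/bot.py):

```python
import json
import os
from pathlib import Path
from typing import Optional

def resolve_token(state_file: Path, runtime_token: Optional[str] = None) -> Optional[str]:
    """Pick the bot token: runtime setup wins, then env var, then persisted state."""
    if runtime_token:
        # Supplied via POST /telegram/setup: persist so later (re)starts find it.
        state_file.write_text(json.dumps({"token": runtime_token}))
        return runtime_token
    env_token = os.environ.get("TELEGRAM_TOKEN")
    if env_token:
        return env_token
    if state_file.exists():
        # Token persisted in telegram_state.json from an earlier setup call.
        return json.loads(state_file.read_text()).get("token")
    return None
```

With this ordering, a token POSTed to the dashboard survives process restarts via telegram_state.json even when TELEGRAM_TOKEN is unset.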

Changes:
- src/telegram_bot/bot.py — TelegramBot singleton: token persistence
  (telegram_state.json), lifecycle (start/stop), /start command and
  message handler that forwards to Timmy
- src/dashboard/routes/telegram.py — /telegram/setup and /telegram/status
  FastAPI routes
- src/dashboard/app.py — register telegram router; auto-start/stop bot
  in lifespan hook
- src/config.py — TELEGRAM_TOKEN setting (pydantic-settings)
- pyproject.toml — [telegram] optional extra (python-telegram-bot>=21),
  telegram_bot wheel include
- .env.example — TELEGRAM_TOKEN section
- .gitignore — exclude telegram_state.json (contains token)
- tests/conftest.py — stub telegram/telegram.ext for offline test runs
- tests/test_telegram_bot.py — 16 tests covering token helpers,
  lifecycle, and all dashboard routes (370 total, all passing)
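The singleton lifecycle in the change list can be sketched like this (class shape is illustrative only; the real bot.py wraps python-telegram-bot's application rather than a bare flag):

```python
import json
from pathlib import Path
from typing import Optional

class TelegramBot:
    """Process-wide singleton owning bot lifecycle and token persistence."""

    _instance: Optional["TelegramBot"] = None

    def __new__(cls, state_file: Path = Path("telegram_state.json")) -> "TelegramBot":
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.state_file = state_file
            cls._instance.running = False
        return cls._instance

    def start(self, token: str) -> None:
        # Persist the token so the bot can be (re)started without
        # re-configuration, then begin handling updates.
        self.state_file.write_text(json.dumps({"token": token}))
        self.running = True

    def stop(self) -> None:
        self.running = False
```

A singleton keeps the dashboard routes and the lifespan hook pointed at the same bot instance, so /telegram/setup can restart exactly what app.py started.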

https://claude.ai/code/session_01CNBm3ZLobtx3Z1YogHq8ZS
2026-02-22 17:16:12 +00:00


# Timmy Time — Mission Control
# Copy this file to .env and uncomment lines you want to override.
# .env is gitignored and never committed.

# Ollama host (default: http://localhost:11434)
# Override if Ollama is running on another machine or port.
# OLLAMA_URL=http://localhost:11434

# LLM model to use via Ollama (default: llama3.2)
# OLLAMA_MODEL=llama3.2

# Enable FastAPI interactive docs at /docs and /redoc (default: false)
# DEBUG=true

# ── AirLLM / big-brain backend ───────────────────────────────────────────────
# Inference backend: "ollama" (default) | "airllm" | "auto"
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
# Requires: pip install ".[bigbrain]"
# TIMMY_MODEL_BACKEND=ollama

# AirLLM model size (default: 70b).
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b

# ── L402 Lightning secrets ───────────────────────────────────────────────────
# HMAC secret for invoice verification. MUST be changed in production.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
# L402_HMAC_SECRET=<your-secret-here>

# HMAC secret for macaroon signing. MUST be changed in production.
# L402_MACAROON_SECRET=<your-secret-here>

# Lightning backend: "mock" (default) | "lnd"
# LIGHTNING_BACKEND=mock

# ── Telegram bot ──────────────────────────────────────────────────────────────
# Bot token from @BotFather on Telegram.
# Alternatively, configure via the /telegram/setup dashboard endpoint at runtime.
# Requires: pip install ".[telegram]"
# TELEGRAM_TOKEN=
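For illustration, a minimal hand-rolled parser shows how a file like this is consumed: blank lines and `#`-commented defaults are ignored, so only uncommented `KEY=VALUE` lines take effect. (The project itself loads these values through pydantic-settings, per the commit; this sketch is only to show the file's semantics.)

```python
def parse_dotenv(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # commented-out defaults, like most of .env.example, are skipped
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values
```

So uncommenting a line such as `TELEGRAM_TOKEN=123:abc` (token value hypothetical) is what makes it visible to the settings loader.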