derp/TODO.md
user c8879f6089 feat: add stable plugin API reference and bump to v2.0.0
Document the full public plugin surface (decorators, bot methods, IRC
primitives, state store, HTTP/DNS helpers) with semver stability
guarantees and breaking-change policy. Bump version from 0.1.0 to 2.0.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 19:22:47 +01:00

derp - Backlog

Core

  • Multi-server support (per-server config, shared plugins)
  • Stable plugin API (versioned, breaking change policy)
  • Paste overflow (auto-paste long output to FlaskPaste)
  • URL shortener integration (shorten URLs in subscription announcements)
  • Webhook listener (HTTP endpoint for push events to channels)
  • Granular ACLs (per-command: trusted, operator, admin)

LLM Bridge

Goal: let an LLM agent interact with the bot over IRC in real-time, with full machine access (bash, file ops, etc.).

Architecture

Owner addresses the bot on IRC. A bridge daemon reads addressed messages, feeds them to an LLM with tool access, and writes replies back through the bot. The bot already has owner config ([bot] owner hostmask patterns) to gate who can trigger LLM interactions.

IRC -> bot stdout (addressed msgs) -> bridge -> LLM API -> bridge -> bot inbox -> IRC
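The pipeline above can be sketched as a small bridge pass: parse one bot stdout line (using the `HH:MM [#chan] <nick> text` format from the plumbing notes below), query the LLM, and append the reply to the inbox file as `<target> <msg>`. The `ask_llm` callable is a stand-in for whichever approach option is chosen; names here are illustrative, not the actual implementation:

```python
import re

# Bot stdout message format from the plumbing notes: "HH:MM [#chan] <nick> text"
LINE_RE = re.compile(r"^\d{2}:\d{2} \[(?P<chan>[^\]]+)\] <(?P<nick>[^>]+)> (?P<text>.*)$")

def parse_line(line):
    """Return (channel, nick, text) for a message line, or None for status lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("chan"), m.group("nick"), m.group("text")

def bridge_once(line, ask_llm, inbox_path="/tmp/llm-inbox"):
    """One bridge pass: parse a bot stdout line, query the LLM via ask_llm,
    append the reply to the inbox file as '<target> <msg>'."""
    parsed = parse_line(line)
    if parsed is None:
        return None  # "HH:MM --- status" line, nothing to route
    chan, nick, text = parsed
    reply = ask_llm(f"<{nick}> {text}")
    with open(inbox_path, "a") as f:
        f.write(f"{chan} {reply}\n")
    return reply
```

A real daemon would wrap this in a loop over the bot's stdout and let the bot's inbox polling pick up the file.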

Approach options (ranked)

  1. Claude Code Agent SDK (clean, non-trivial)

    • Custom Python agent using anthropic SDK with tool_use
    • Define tools: bash exec, file read/write, web fetch
    • Persistent conversation memory across messages
    • Full control over the event loop -- real-time IRC is natural
    • Tradeoff: must implement and maintain tool definitions
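A minimal sketch of option 1, assuming the anthropic Python SDK's tool_use flow; the tool name, schema, and model id are our own choices, not fixed by anything in this repo:

```python
import subprocess

# Tool definition in the Anthropic tool_use schema (name/description are ours)
TOOLS = [
    {
        "name": "bash",
        "description": "Run a shell command and return its combined output.",
        "input_schema": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
]

def run_tool(name, tool_input):
    """Dispatch a tool_use request from the model to a local implementation."""
    if name == "bash":
        out = subprocess.run(tool_input["command"], shell=True,
                             capture_output=True, text=True, timeout=30)
        return out.stdout + out.stderr
    raise ValueError(f"unknown tool: {name}")

def agent_turn(client, history, model="claude-sonnet-4-5"):
    """One agent turn: call the model, execute any tool_use blocks,
    loop until the model stops requesting tools. `history` persists
    across messages, giving the conversation memory noted above."""
    while True:
        resp = client.messages.create(model=model, max_tokens=1024,
                                      tools=TOOLS, messages=history)
        history.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            return resp
        results = [{"type": "tool_result", "tool_use_id": b.id,
                    "content": run_tool(b.name, b.input)}
                   for b in resp.content if b.type == "tool_use"]
        history.append({"role": "user", "content": results})
```

`agent_turn` needs an `anthropic.Anthropic()` client and an API key; `run_tool` runs locally.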
  2. Claude Code CLI per-message (simple, stateless)

    • echo "user said X" | claude --print --allowedTools bash,read,write
    • Each invocation is a cold start with no conversation memory
    • Simple to implement, but startup is slow and there is no multi-turn context
    • Could pass conversation history via system prompt (fragile)
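Option 2 reduces to one subprocess call per message. A sketch, using the flags shown in the bullet above (check `claude --help` for the current spelling before relying on them):

```python
import subprocess

def build_cmd(tools="bash,read,write"):
    """Command line for one stateless invocation, per the note above."""
    return ["claude", "--print", "--allowedTools", tools]

def ask_once(prompt, tools="bash,read,write"):
    """One cold-start CLI call: prompt in on stdin, reply out on stdout."""
    out = subprocess.run(build_cmd(tools), input=prompt,
                         capture_output=True, text=True)
    return out.stdout.strip()
```

Any conversation history would have to be re-sent inside `prompt` each time, which is the fragility noted above.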
  3. Persistent Claude Code subprocess (hack, fragile)

    • Long-running claude process with stdin/stdout piped
    • Keeps context across messages within a session
    • Not designed for this -- output parsing is brittle
    • Session may drift or hit context limits
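Option 3 amounts to a line-oriented wrapper around a long-lived subprocess. A generic sketch (shown against a stand-in echo command; wrapping the real `claude` binary this way is exactly the brittle part, since its output is not framed as one line per reply):

```python
import subprocess

class PersistentSession:
    """Keep one subprocess alive and exchange lines over its pipes."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     text=True, bufsize=1)

    def ask(self, line):
        """Write one line, block until one line of output comes back."""
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

The one-line-in/one-line-out assumption is what drifts in practice: multi-line or streamed output desynchronizes `ask`, and there is no way to detect when the session hits context limits.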

Bot-side plumbing needed

  • --llm CLI flag: route logging to a file, reserve stdout for addressed msgs
  • _is_addressed(): DM or nick-prefixed messages
  • _is_owner(): only owner hostmasks trigger LLM routing
  • Inbox file polling (/tmp/llm-inbox): bridge writes <target> <msg>
  • llm-send script: line splitting (400 char), FlaskPaste overflow
  • Stdout format: HH:MM [#chan] <nick> text / HH:MM --- status
  • Only LLM-originated replies echoed to stdout (not all bot output)
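The addressing, owner gating, and line-splitting pieces above are small enough to sketch; the bot nick and the use of fnmatch-style hostmask globbing are assumptions here, not taken from the bot's code:

```python
import fnmatch

def is_addressed(target, text, nick="derp"):
    """DM, or channel message prefixed with the bot's nick (nick assumed)."""
    if not target.startswith(("#", "&")):
        return True  # direct message
    low = text.lower()
    return low.startswith((nick.lower() + ":", nick.lower() + ","))

def is_owner(prefix, patterns):
    """Match a nick!user@host prefix against [bot] owner hostmask patterns,
    treating them as shell-style globs (one plausible matching scheme)."""
    return any(fnmatch.fnmatch(prefix.lower(), p.lower()) for p in patterns)

def split_lines(text, limit=400):
    """Chunk a reply at the 400-char limit; anything past a few chunks
    would go to FlaskPaste overflow instead of IRC."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```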

Previous attempt (reverted)

The --llm mode was implemented and tested (commit ea6f079, reverted in 6f1f4b2). The stdout/stdin plumbing worked but Claude Code CLI cannot act as a real-time daemon -- each tool call is a blocking round-trip, making interactive IRC conversation impractical. The code is preserved in git history for reference.

Plugins -- Security/OSINT

  • emailcheck -- SMTP VRFY/RCPT TO verification
  • canary -- canary token generator/tracker
  • virustotal -- hash/URL/IP/domain lookup (free API)
  • abuseipdb -- IP abuse confidence scoring (free tier)
  • jwt -- decode tokens, show claims/expiry, flag weaknesses
  • mac -- OUI vendor lookup (local IEEE database)
  • pastemoni -- monitor paste sites for keywords
  • internetdb -- Shodan InternetDB host recon (free, no API key)

Plugins -- Utility

  • paste -- manual paste to FlaskPaste
  • shorten -- manual URL shortening
  • cron -- scheduled bot commands on a timer

Testing

  • Plugin command unit tests (encode, hash, dns, cidr, defang)
  • CI pipeline (Gitea Actions)
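For the plugin unit tests, pytest-style functions against pure command helpers keep CI simple. A sketch for defang; the exact defang behaviour (hxxp scheme, bracketed dots) is assumed here, not taken from the plugin:

```python
def defang(url):
    """Assumed defang rules: http -> hxxp, '.' -> '[.]'."""
    return url.replace("http", "hxxp").replace(".", "[.]")

def test_defang_scheme_and_dots():
    assert defang("http://example.com") == "hxxp://example[.]com"

def test_defang_https():
    assert defang("https://a.b.c") == "hxxps://a[.]b[.]c"
```

The same shape covers encode, hash, cidr, and dns once their helpers are importable without a running bot.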