# derp - Backlog

## Core
- Multi-server support (per-server config, shared plugins)
- Stable plugin API (versioned, breaking change policy)
- Paste overflow (auto-paste long output to FlaskPaste)
- URL shortener integration (shorten URLs in subscription announcements)
- Webhook listener (HTTP endpoint for push events to channels)
- Granular ACLs (per-command: trusted, operator, admin)
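The granular-ACL item above could work as a simple tier lattice. A minimal sketch, assuming three ordered tiers and a per-command minimum; all names (`TIERS`, `COMMAND_ACL`, `allowed`, the example commands) are illustrative, not the bot's actual API:

```python
# Hypothetical per-command ACL tiers: trusted < operator < admin.
TIERS = {"trusted": 1, "operator": 2, "admin": 3}

# Example per-command minimum tiers (assignments are illustrative).
COMMAND_ACL = {"dns": "trusted", "kick": "operator", "reload": "admin"}

def allowed(user_tier: str, command: str) -> bool:
    """True if the user's tier meets the command's minimum tier."""
    required = COMMAND_ACL.get(command, "admin")  # unlisted commands: admin-only
    return TIERS.get(user_tier, 0) >= TIERS[required]
```

Defaulting unlisted commands to admin-only fails closed, which is usually the safer choice for a bot with shell-adjacent plugins.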
## LLM Bridge
Goal: let an LLM agent interact with the bot over IRC in real-time, with full machine access (bash, file ops, etc.).
### Architecture
Owner addresses the bot on IRC. A bridge daemon reads addressed
messages, feeds them to an LLM with tool access, and writes replies
back through the bot. The bot already has owner config
(`[bot] owner` hostmask patterns) to gate who can trigger LLM
interactions.
```
IRC -> bot stdout (addressed msgs) -> bridge -> LLM API -> bridge -> bot inbox -> IRC
```
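The bridge's two ends of that pipeline can be sketched in a few lines. This assumes the stdout format and the `<target> <msg>` inbox format described under "Bot-side plumbing" below; the function names are illustrative:

```python
# Bridge-daemon sketch: parse addressed messages from the bot's stdout,
# and write LLM replies to the inbox file the bot polls.
import re

# Matches the "HH:MM [#chan] <nick> text" stdout format.
LINE_RE = re.compile(r"^\d\d:\d\d \[(?P<chan>#\S+)\] <(?P<nick>\S+)> (?P<text>.*)$")

def parse_line(line: str):
    """Return (channel, nick, text) for a message line, None for status lines."""
    m = LINE_RE.match(line)
    return (m["chan"], m["nick"], m["text"]) if m else None

def reply_to_inbox(inbox_path: str, target: str, msg: str) -> None:
    """Append one reply in the '<target> <msg>' format the bot polls for."""
    with open(inbox_path, "a") as f:
        f.write(f"{target} {msg}\n")
```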
### Approach options (ranked)
1. **Claude Code Agent SDK** (clean, non-trivial)
   - Custom Python agent using the `anthropic` SDK with `tool_use`
   - Define tools: bash exec, file read/write, web fetch
   - Persistent conversation memory across messages
   - Full control over the event loop -- real-time IRC is natural
   - Tradeoff: must implement and maintain tool definitions
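The "implement and maintain tool definitions" tradeoff looks roughly like this. A sketch of one tool schema plus the local dispatcher that would execute the model's `tool_use` requests; the tool name, schema, and `run_tool` helper are assumptions, not an existing API (the actual `messages.create` call loop is omitted):

```python
# One illustrative tool definition in the anthropic tool_use schema shape,
# and a local dispatcher to execute tool calls the model requests.
import subprocess

TOOLS = [
    {
        "name": "bash",
        "description": "Run a shell command and return its output.",
        "input_schema": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
]

def run_tool(name: str, args: dict) -> str:
    """Execute one tool_use request and return the result for the model."""
    if name == "bash":
        out = subprocess.run(
            args["command"], shell=True, capture_output=True, text=True, timeout=30
        )
        return out.stdout + out.stderr
    raise ValueError(f"unknown tool: {name}")
```

Each additional tool (file read/write, web fetch) is another schema entry plus a branch in the dispatcher -- straightforward, but it is code the bot owner maintains.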
2. **Claude Code CLI per-message** (simple, stateless)
   - `echo "user said X" | claude --print --allowedTools bash,read,write`
   - Each invocation is a cold start with no conversation memory
   - Simple to implement but slow startup, no multi-turn context
   - Could pass conversation history via the system prompt (fragile)
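From the bridge side, that one-shot invocation is a single blocking subprocess call per message. A sketch, with the `ask_once` helper as an assumption (any one-shot CLI works; the `claude` flags shown are the ones from the text):

```python
# Stateless, per-message invocation: spawn the CLI, pipe the prompt in,
# return stdout. Every call is a cold start.
import subprocess

def ask_once(cmd: list[str], prompt: str) -> str:
    """Pipe one prompt to a one-shot CLI and return its stdout."""
    out = subprocess.run(cmd, input=prompt, capture_output=True, text=True)
    return out.stdout

# Example (assumes the claude CLI is installed):
# ask_once(["claude", "--print", "--allowedTools", "bash,read,write"], "user said X")
```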
3. **Persistent Claude Code subprocess** (hack, fragile)
   - Long-running `claude` process with stdin/stdout piped
   - Keeps context across messages within a session
   - Not designed for this -- output parsing is brittle
   - Session may drift or hit context limits
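The persistent-subprocess hack amounts to keeping one piped process alive and exchanging lines with it. A sketch of the plumbing (the `PipedSession` class is illustrative; the brittleness noted above lives in the line-oriented `ask` assumption, since real `claude` output is not one reply per line):

```python
# Long-running piped subprocess: one process, many exchanges.
import subprocess

class PipedSession:
    """Keep a subprocess alive and exchange lines over stdin/stdout."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True, bufsize=1
        )

    def ask(self, line: str) -> str:
        # Assumes one reply line per input line -- the fragile part.
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```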
### Bot-side plumbing needed
- `--llm` CLI flag: route logging to file, stdout reserved for addressed msgs
- `_is_addressed()`: DM or nick-prefixed messages
- `_is_owner()`: only owner hostmasks trigger LLM routing
- Inbox file polling (`/tmp/llm-inbox`): bridge writes `<target> <msg>`
- `llm-send` script: line splitting (400 chars), FlaskPaste overflow
- Stdout format: `HH:MM [#chan] <nick> text` / `HH:MM --- status`
- Only LLM-originated replies echoed to stdout (not all bot output)
### Previous attempt (reverted)
The `--llm` mode was implemented and tested (commit ea6f079, reverted
in 6f1f4b2). The stdout/stdin plumbing worked but Claude Code CLI
cannot act as a real-time daemon -- each tool call is a blocking
round-trip, making interactive IRC conversation impractical. The code
is preserved in git history for reference.
## Plugins -- Security/OSINT

- `emailcheck` -- SMTP VRFY/RCPT TO verification
- `canary` -- canary token generator/tracker
- `virustotal` -- hash/URL/IP/domain lookup (free API)
- `abuseipdb` -- IP abuse confidence scoring (free tier)
- `jwt` -- decode tokens, show claims/expiry, flag weaknesses
- `mac` -- OUI vendor lookup (local IEEE database)
- `pastemoni` -- monitor paste sites for keywords
- `internetdb` -- Shodan InternetDB host recon (free, no API key)
## Plugins -- Utility

- `paste` -- manual paste to FlaskPaste
- `shorten` -- manual URL shortening
- `cron` -- scheduled bot commands on a timer
## Testing
- Plugin command unit tests (encode, hash, dns, cidr, defang)
- CI pipeline (Gitea Actions)