feat: metadata enrichment for alerts and subscription plugins
Alert backends now populate the structured `extra` field with engagement metrics (views, stars, votes, etc.) instead of embedding them in titles. Subscription plugins show richer announcements: Twitch viewer counts, YouTube views/likes/dates, RSS published dates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
TASKS.md | 26
@@ -1,6 +1,30 @@
 # derp - Tasks
 
-## Current Sprint -- v1.2.5 Paste Site Keyword Monitor (2026-02-18)
+## Current Sprint -- v1.2.7 Subscription Plugin Enrichment (2026-02-19)
+
+| Pri | Status | Task |
+|-----|--------|------|
+| P0 | [x] | Twitch: viewer count in live announcements (`| 50k viewers`) |
+| P0 | [x] | YouTube: views, likes, published date in announcements (`| 1.5Mv 45klk 2026-01-15`) |
+| P0 | [x] | RSS: published date in announcements (`| 2026-02-10`) |
+| P1 | [x] | Twitch `check`/`list` show viewer count |
+| P1 | [x] | RSS `_parse_date` helper (ISO + RFC 2822) |
+| P2 | [x] | Tests: twitch/youtube/rss enrichment (263 sub-plugin tests, 868 total) |
+| P2 | [x] | Documentation update (USAGE.md announcement formats) |
+
+## Previous Sprint -- v1.2.6 Alert Backend Metadata Enrichment (2026-02-18)
+
+| Pri | Status | Task |
+|-----|--------|------|
+| P0 | [x] | `_compact_num` helper (1k/1.2M formatting) |
+| P0 | [x] | DB migration: `extra` column in results table |
+| P0 | [x] | Backend metadata: 15 backends populate `extra` field |
+| P1 | [x] | Move engagement metrics from titles to `extra` (HN, GH, GL, SE, DH, HF, KK) |
+| P1 | [x] | Display: announcements, history, info show `| extra` suffix |
+| P2 | [x] | Tests: `TestCompactNum`, extra in poll/history/info (91 total) |
+| P2 | [x] | Documentation update (USAGE.md metadata table) |
+
+## Previous Sprint -- v1.2.5 Paste Site Keyword Monitor (2026-02-18)
 
 | Pri | Status | Task |
 |-----|--------|------|
USAGE.md

@@ -572,7 +572,8 @@ Polling and announcements:
 
 - Feeds are polled every 10 minutes by default
 - On `add`, existing items are recorded without announcing (prevents flood)
-- New items are announced as `[name] title -- link`
+- New items are announced as `[name] title | YYYY-MM-DD -- link`
+- Published date is included when available (RSS `pubDate`, Atom `published`)
 - Maximum 5 items announced per poll; excess shown as `... and N more`
 - Titles are truncated to 80 characters
 - Supports HTTP conditional requests (`ETag`, `If-Modified-Since`)

@@ -608,7 +609,8 @@ Polling and announcements:
 
 - Channels are polled every 10 minutes by default
 - On `follow`, existing videos are recorded without announcing (prevents flood)
-- New videos are announced as `[name] Video Title -- https://www.youtube.com/watch?v=ID`
+- New videos are announced as `[name] Video Title | 1.5Mv 45klk 2026-01-15 -- URL`
+- Metadata suffix includes views, likes, and published date when available
 - Maximum 5 videos announced per poll; excess shown as `... and N more`
 - Titles are truncated to 80 characters
 - Supports HTTP conditional requests (`ETag`, `If-Modified-Since`)

@@ -639,11 +641,12 @@ Polling and announcements:
 
 - Streamers are polled every 2 minutes by default
 - On `follow`, the current stream state is recorded without announcing
 - Announcements fire on state transitions: offline to live, or new stream ID
-- Format: `[name] is live: Stream Title (Game) -- https://twitch.tv/login`
-- Game is omitted if not set; titles are truncated to 80 characters
+- Format: `[name] is live: Stream Title (Game) | 50k viewers -- https://twitch.tv/login`
+- Game is omitted if not set; viewer count shown when available
+- Titles are truncated to 80 characters
 - 5 consecutive errors doubles the poll interval (max 1 hour)
 - Subscriptions persist across bot restarts via `bot.state`
-- `list` shows live/error status indicators next to each streamer
+- `list` shows live/error status with viewer count: `name (live, 50k)`
 - `check` forces an immediate poll and reports current status
 
 ### `!searx` -- SearXNG Web Search

@@ -720,6 +723,26 @@ Platforms searched:
 
 - **Medium** (`md`) -- Tag-based RSS feed (no auth required)
 - **Hugging Face** (`hf`) -- Model search API, sorted by downloads (no auth required)
 
+Backend metadata (shown as `| extra` suffix on titles):
+
+| Tag | Metrics | Example |
+|-----|---------|---------|
+| `tw` | viewers / views | `500 viewers`, `1k views` |
+| `rd` | score, comments | `+127 42c` |
+| `ft` | reblogs, favourites | `3rb 12fav` |
+| `bs` | likes, reposts | `5lk 2rp` |
+| `ly` | score, comments | `+15 3c` |
+| `kk` | viewers | `500 viewers` |
+| `dm` | views | `1.2k views` |
+| `pt` | views, likes | `120v 5lk` |
+| `hn` | points, comments | `127pt 42c` |
+| `gh` | stars, forks | `42* 5fk` |
+| `gl` | stars, forks | `42* 5fk` |
+| `se` | score, answers, views | `+5 3a 1.2kv` |
+| `dh` | stars, pulls | `42* 1.2M pulls` |
+| `hf` | downloads, likes | `500dl 12lk` |
+| `dv` | reactions, comments | `+15 3c` |
+
 Polling and announcements:
 
 - Alerts are polled every 5 minutes by default

@@ -727,7 +750,7 @@ Polling and announcements:
   background to avoid flooding
 - New results announced as two lines:
   - ACTION: `* derp [name/<tag>/<id>] date - URL`
-  - PRIVMSG: full uncropped title/content
+  - PRIVMSG: `title | extra` (title with compact engagement metrics when available)
 - Tags: `yt`, `tw`, `sx`, `rd`, `ft`, `dg`, `gn`, `kk`, `dm`, `pt`, `bs`, `ly`,
   `od`, `ia`, `hn`, `gh`, `wp`, `se`, `gl`, `nm`, `pp`, `dh`, `ax`, `lb`, `dv`,
   `md`, `hf` -- `<id>` is a short deterministic ID for use with `!alert info`
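The PRIVMSG line composition documented above (title plus an optional compact-metrics suffix) can be sketched standalone; `privmsg_line` is a hypothetical helper written for illustration, not a function from the plugin:

```python
def privmsg_line(item: dict) -> str:
    """Render the PRIVMSG half of an announcement: `title | extra`.

    `item` mirrors the backend result shape used in this commit:
    a dict with "title" and an optional "extra" metrics string.
    """
    title = item.get("title") or "(no title)"
    extra = item.get("extra", "")
    # Only append the pipe suffix when the backend supplied metrics.
    return f"{title} | {extra}" if extra else title


print(privmsg_line({"title": "Cool repo", "extra": "42* 5fk"}))  # Cool repo | 42* 5fk
print(privmsg_line({"title": "Plain title", "extra": ""}))       # Plain title
```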
plugins/alert.py | 166
@@ -148,6 +148,7 @@ def _db() -> sqlite3.Connection:
     for col, default in [
         ("short_id", "''"),
         ("short_url", "''"),
+        ("extra", "''"),
     ]:
         try:
             _conn.execute(
@@ -181,8 +182,8 @@ def _save_result(channel: str, alert: str, backend: str, item: dict,
     db.execute(
         "INSERT INTO results"
         " (channel, alert, backend, item_id, title, url, date, found_at,"
-        " short_id, short_url)"
-        " VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
+        " short_id, short_url, extra)"
+        " VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
         (
             channel,
             alert,
@@ -194,6 +195,7 @@ def _save_result(channel: str, alert: str, backend: str, item: dict,
             datetime.now(timezone.utc).isoformat(),
             short_id,
             short_url,
+            item.get("extra", ""),
         ),
     )
     db.commit()
@@ -286,6 +288,15 @@ class _DDGParser(HTMLParser):
         self.results.append((self._url, title))
 
 
+def _compact_num(n: int) -> str:
+    """Format large numbers compactly: 1234 -> 1.2k, 1234567 -> 1.2M."""
+    if n >= 1_000_000:
+        return f"{n / 1_000_000:.1f}M".replace(".0M", "M")
+    if n >= 1_000:
+        return f"{n / 1_000:.1f}k".replace(".0k", "k")
+    return str(n)
+
+
 def _make_short_id(backend: str, item_id: str) -> str:
     """Deterministic 8-char base36 hash from backend:item_id."""
     digest = hashlib.sha256(f"{backend}:{item_id}".encode()).digest()
@@ -469,12 +480,14 @@ def _search_twitch(keyword: str) -> list[dict]:
         line = f"{display} is live: {title}"
         if game:
             line += f" ({game})"
+        viewers = item.get("viewersCount", 0)
+        extra = f"{_compact_num(viewers)} viewers" if viewers else ""
         results.append({
            "id": f"stream:{stream_id}",
            "title": line,
            "url": f"https://twitch.tv/{login}",
            "date": "",
-           "extra": "",
+           "extra": extra,
         })
 
     # VODs
@@ -484,12 +497,14 @@ def _search_twitch(keyword: str) -> list[dict]:
         if not vod_id:
             continue
         title = item.get("title", "")
+        views = item.get("viewCount", 0)
+        extra = f"{_compact_num(views)} views" if views else ""
         results.append({
            "id": f"vod:{vod_id}",
            "title": title,
            "url": f"https://twitch.tv/videos/{vod_id}",
            "date": "",
-           "extra": "",
+           "extra": extra,
         })
 
     return results
@@ -579,12 +594,19 @@ def _search_reddit(keyword: str) -> list[dict]:
             ).strftime("%Y-%m-%d")
         except (ValueError, OSError):
             pass
+        score = post.get("score", 0)
+        num_comments = post.get("num_comments", 0)
+        parts = []
+        if score:
+            parts.append(f"+{_compact_num(score)}")
+        if num_comments:
+            parts.append(f"{_compact_num(num_comments)}c")
         results.append({
             "id": post_id,
             "title": title,
             "url": f"https://www.reddit.com{permalink}" if permalink else "",
             "date": date,
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return results
 
 
@@ -622,12 +644,19 @@ def _search_mastodon(keyword: str) -> list[dict]:
         acct = (status.get("account") or {}).get("acct", "")
         content = _strip_html(status.get("content", ""))
         title = f"@{acct}: {content}" if acct else content
+        reblogs = status.get("reblogs_count", 0)
+        favs = status.get("favourites_count", 0)
+        parts = []
+        if reblogs:
+            parts.append(f"{_compact_num(reblogs)}rb")
+        if favs:
+            parts.append(f"{_compact_num(favs)}fav")
         items.append({
             "id": status_url,
             "title": title,
             "url": status_url,
             "date": _parse_date(status.get("created_at", "")),
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return items
@@ -783,15 +812,13 @@ def _search_kick(keyword: str) -> list[dict]:
         channel = stream.get("channel") or {}
         slug = channel.get("slug", "")
         viewers = stream.get("viewer_count", 0)
-        title = session_title
-        if viewers:
-            title += f" ({viewers} viewers)"
+        extra = f"{_compact_num(viewers)} viewers" if viewers else ""
         results.append({
             "id": f"live:{stream_id}",
-            "title": title,
+            "title": session_title,
             "url": f"https://kick.com/{slug}" if slug else "",
             "date": _parse_date(stream.get("start_time", "")),
-            "extra": "",
+            "extra": extra,
         })
 
     return results
@@ -807,7 +834,7 @@ def _search_dailymotion(keyword: str) -> list[dict]:
         "search": keyword,
         "sort": "recent",
         "limit": "25",
-        "fields": "id,title,url,created_time",
+        "fields": "id,title,url,created_time,views_total",
     })
     url = f"{_DAILYMOTION_API}?{params}"
@@ -833,12 +860,14 @@ def _search_dailymotion(keyword: str) -> list[dict]:
             ).strftime("%Y-%m-%d")
         except (ValueError, OSError):
             pass
+        views = item.get("views_total", 0)
+        extra = f"{_compact_num(views)} views" if views else ""
         results.append({
             "id": video_id,
             "title": title,
             "url": video_url,
             "date": date,
-            "extra": "",
+            "extra": extra,
         })
     return results
 
 
@@ -872,12 +901,19 @@ def _search_peertube(keyword: str) -> list[dict]:
         name = video.get("name", "")
         acct = (video.get("account") or {}).get("displayName", "")
         title = f"{acct}: {name}" if acct else name
+        views = video.get("views", 0)
+        likes = video.get("likes", 0)
+        parts = []
+        if views:
+            parts.append(f"{_compact_num(views)}v")
+        if likes:
+            parts.append(f"{_compact_num(likes)}lk")
         items.append({
             "id": video_url,
             "title": title,
             "url": video_url,
             "date": _parse_date(video.get("publishedAt", "")),
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return items
 
 
@@ -923,12 +959,19 @@ def _search_bluesky(keyword: str) -> list[dict]:
         title = f"@{display}: {text}"
         date = _parse_date(record.get("createdAt", ""))
         post_url = f"https://bsky.app/profile/{handle}/post/{rkey}" if handle else ""
+        like_count = post.get("likeCount", 0)
+        repost_count = post.get("repostCount", 0)
+        parts = []
+        if like_count:
+            parts.append(f"{_compact_num(like_count)}lk")
+        if repost_count:
+            parts.append(f"{_compact_num(repost_count)}rp")
         results.append({
             "id": uri,
             "title": title,
             "url": post_url,
             "date": date,
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return results
@@ -965,12 +1008,20 @@ def _search_lemmy(keyword: str) -> list[dict]:
         community = (entry.get("community") or {}).get("name", "")
         title = f"{community}: {name}" if community else name
         post_url = post.get("url") or ap_id
+        counts = entry.get("counts") or {}
+        score = counts.get("score", 0)
+        comments = counts.get("comments", 0)
+        parts = []
+        if score:
+            parts.append(f"+{_compact_num(score)}")
+        if comments:
+            parts.append(f"{_compact_num(comments)}c")
         items.append({
             "id": ap_id,
             "title": title,
             "url": post_url,
             "date": _parse_date(post.get("published", "")),
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return items
 
 
@@ -1116,15 +1167,19 @@ def _search_hackernews(keyword: str) -> list[dict]:
         # External URL if available, otherwise HN discussion link
         item_url = hit.get("url") or f"https://news.ycombinator.com/item?id={object_id}"
         date = _parse_date(hit.get("created_at", ""))
-        points = hit.get("points")
+        points = hit.get("points", 0)
+        num_comments = hit.get("num_comments", 0)
+        parts = []
         if points:
-            title += f" ({points}pts)"
+            parts.append(f"{_compact_num(points)}pt")
+        if num_comments:
+            parts.append(f"{_compact_num(num_comments)}c")
         results.append({
             "id": object_id,
             "title": title,
             "url": item_url,
             "date": date,
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return results
@@ -1158,18 +1213,22 @@ def _search_github(keyword: str) -> list[dict]:
         description = repo.get("description") or ""
         html_url = repo.get("html_url", "")
         stars = repo.get("stargazers_count", 0)
+        forks = repo.get("forks_count", 0)
         title = full_name
         if description:
             title += f": {description}"
+        parts = []
         if stars:
-            title += f" [{stars}*]"
+            parts.append(f"{_compact_num(stars)}*")
+        if forks:
+            parts.append(f"{_compact_num(forks)}fk")
         date = _parse_date(repo.get("updated_at", ""))
         results.append({
             "id": repo_id,
             "title": title,
             "url": html_url,
             "date": date,
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return results
 
 
@@ -1248,8 +1307,15 @@ def _search_stackexchange(keyword: str) -> list[dict]:
         title = _strip_html(item.get("title", ""))
         link = item.get("link", "")
         score = item.get("score", 0)
+        answer_count = item.get("answer_count", 0)
+        view_count = item.get("view_count", 0)
+        parts = []
         if score:
-            title += f" [{score}v]"
+            parts.append(f"+{_compact_num(score)}")
+        if answer_count:
+            parts.append(f"{_compact_num(answer_count)}a")
+        if view_count:
+            parts.append(f"{_compact_num(view_count)}v")
         created = item.get("creation_date")
         date = ""
         if created:
@@ -1261,7 +1327,7 @@ def _search_stackexchange(keyword: str) -> list[dict]:
             pass
         results.append({
             "id": qid, "title": title, "url": link,
-            "date": date, "extra": "",
+            "date": date, "extra": " ".join(parts),
         })
     return results
@@ -1295,15 +1361,19 @@ def _search_gitlab(keyword: str) -> list[dict]:
         description = repo.get("description") or ""
         web_url = repo.get("web_url", "")
         stars = repo.get("star_count", 0)
+        forks = repo.get("forks_count", 0)
         title = name
         if description:
             title += f": {description}"
+        parts = []
         if stars:
-            title += f" [{stars}*]"
+            parts.append(f"{_compact_num(stars)}*")
+        if forks:
+            parts.append(f"{_compact_num(forks)}fk")
         date = _parse_date(repo.get("last_activity_at", ""))
         results.append({
             "id": rid, "title": title, "url": web_url,
-            "date": date, "extra": "",
+            "date": date, "extra": " ".join(parts),
         })
     return results
 
 
@@ -1408,18 +1478,22 @@ def _search_dockerhub(keyword: str) -> list[dict]:
             continue
         description = item.get("short_description") or ""
         stars = item.get("star_count", 0)
+        pulls = item.get("pull_count", 0)
         title = name
         if description:
             title += f": {description}"
+        parts = []
         if stars:
-            title += f" [{stars}*]"
+            parts.append(f"{_compact_num(stars)}*")
+        if pulls:
+            parts.append(f"{_compact_num(pulls)} pulls")
         hub_url = (
             f"https://hub.docker.com/r/{name}" if "/" in name
             else f"https://hub.docker.com/_/{name}"
         )
         results.append({
             "id": name, "title": title, "url": hub_url,
-            "date": "", "extra": "",
+            "date": "", "extra": " ".join(parts),
         })
     return results
@@ -1574,10 +1648,17 @@ def _search_devto(keyword: str) -> list[dict]:
             author = ""
         if author:
             title = f"{author}: {title}"
+        reactions = item.get("positive_reactions_count", 0)
+        comments = item.get("comments_count", 0)
+        parts = []
+        if reactions:
+            parts.append(f"+{_compact_num(reactions)}")
+        if comments:
+            parts.append(f"{_compact_num(comments)}c")
         date = _parse_date(item.get("published_at", ""))
         results.append({
             "id": article_id, "title": title, "url": article_url,
-            "date": date, "extra": "",
+            "date": date, "extra": " ".join(parts),
         })
     return results
 
 
@@ -1656,17 +1737,18 @@ def _search_huggingface(keyword: str) -> list[dict]:
         downloads = model.get("downloads", 0)
         likes = model.get("likes", 0)
         title = model_id
+        parts = []
         if downloads:
-            title += f" [{downloads} dl]"
-        elif likes:
-            title += f" [{likes} likes]"
+            parts.append(f"{_compact_num(downloads)}dl")
+        if likes:
+            parts.append(f"{_compact_num(likes)}lk")
         date = _parse_date(model.get("lastModified", ""))
         results.append({
             "id": model_id,
             "title": title,
             "url": f"https://huggingface.co/{model_id}",
             "date": date,
-            "extra": "",
+            "extra": " ".join(parts),
         })
     return results
@@ -1836,6 +1918,9 @@ async def _poll_once(bot, key: str, announce: bool = True) -> None:
                 channel, name, tag, item, short_url=short_url,
             )
             title = item["title"] or "(no title)"
+            extra = item.get("extra", "")
+            if extra:
+                title = f"{title} | {extra}"
             date = item.get("date", "")
             meta = f"[{name}/{tag}/{short_id}]"
             if date:
@@ -2003,7 +2088,7 @@ async def cmd_alert(bot, message):
         db = _db()
         rows = db.execute(
             "SELECT id, backend, title, url, date, found_at, short_id,"
-            " short_url FROM results"
+            " short_url, extra FROM results"
             " WHERE channel = ? AND alert = ? ORDER BY id DESC LIMIT ?",
             (channel, name, limit),
         ).fetchall()
@@ -2013,9 +2098,12 @@ async def cmd_alert(bot, message):
         loop = asyncio.get_running_loop()
         fp = bot.registry._modules.get("flaskpaste")
         history_lines = []
-        for row_id, backend, title, url, date, found_at, short_id, short_url in reversed(rows):
+        for (row_id, backend, title, url, date, found_at,
+             short_id, short_url, extra) in reversed(rows):
             ts = found_at[:10]
             title = _truncate(title) if title else "(no title)"
+            if extra:
+                title = f"{title} | {extra}"
             display_url = short_url or url
             if fp and url and not short_url:
                 try:
@@ -2050,15 +2138,19 @@ async def cmd_alert(bot, message):
         channel = message.target
         db = _db()
         row = db.execute(
-            "SELECT alert, backend, title, url, date, found_at, short_id"
+            "SELECT alert, backend, title, url, date, found_at, short_id,"
+            " extra"
             " FROM results WHERE short_id = ? AND channel = ? LIMIT 1",
             (short_id, channel),
         ).fetchone()
         if not row:
             await bot.reply(message, f"No result with id '{short_id}'")
             return
-        alert, backend, title, url, date, found_at, sid = row
-        await bot.reply(message, f"[{alert}/{backend}/{sid}] {title or '(no title)'}")
+        alert, backend, title, url, date, found_at, sid, extra = row
+        display = title or "(no title)"
+        if extra:
+            display = f"{display} | {extra}"
+        await bot.reply(message, f"[{alert}/{backend}/{sid}] {display}")
         if url:
             await bot.reply(message, url)
         await bot.reply(
@@ -135,6 +135,21 @@ def _fetch_feed(url: str, etag: str = "", last_modified: str = "") -> dict:
 
 # -- Feed parsing ------------------------------------------------------------
 
+def _parse_date(raw: str) -> str:
+    """Try to extract a YYYY-MM-DD date from a raw date string."""
+    import re as _re
+    m = _re.search(r"\d{4}-\d{2}-\d{2}", raw)
+    if m:
+        return m.group(0)
+    # Try RFC 2822 (common in RSS pubDate)
+    from email.utils import parsedate_to_datetime
+    try:
+        dt = parsedate_to_datetime(raw)
+        return dt.strftime("%Y-%m-%d")
+    except (ValueError, TypeError):
+        return ""
+
+
 def _parse_rss(root: ET.Element) -> tuple[str, list[dict]]:
     """Parse RSS 2.0 feed."""
     channel = root.find("channel")
@@ -146,8 +161,13 @@ def _parse_rss(root: ET.Element) -> tuple[str, list[dict]]:
         item_id = item.findtext("guid") or item.findtext("link") or ""
         item_title = (item.findtext("title") or "").strip()
         item_link = (item.findtext("link") or "").strip()
+        pub_date = (item.findtext("pubDate") or "").strip()
+        date = _parse_date(pub_date) if pub_date else ""
         if item_id:
-            items.append({"id": item_id, "title": item_title, "link": item_link})
+            items.append({
+                "id": item_id, "title": item_title,
+                "link": item_link, "date": date,
+            })
     return (title, items)
 
 
@@ -162,8 +182,14 @@ def _parse_atom(root: ET.Element) -> tuple[str, list[dict]]:
         if not entry_id:
             entry_id = entry_link
         entry_title = (entry.findtext(f"{_ATOM_NS}title") or "").strip()
+        published = (entry.findtext(f"{_ATOM_NS}published") or "").strip()
+        updated = (entry.findtext(f"{_ATOM_NS}updated") or "").strip()
+        date = _parse_date(published or updated)
         if entry_id:
-            items.append({"id": entry_id, "title": entry_title, "link": entry_link})
+            items.append({
+                "id": entry_id, "title": entry_title,
+                "link": entry_link, "date": date,
+            })
     return (title, items)
 
 
@@ -246,7 +272,10 @@ async def _poll_once(bot, key: str, announce: bool = True) -> None:
         for item in shown:
             title = _truncate(item["title"]) if item["title"] else "(no title)"
             link = item["link"]
+            date = item.get("date", "")
             line = f"[{name}] {title}"
+            if date:
+                line += f" | {date}"
             if link:
                 line += f" -- {link}"
             await bot.send(channel, line)
@@ -49,6 +49,15 @@ def _truncate(text: str, max_len: int = _MAX_TITLE_LEN) -> str:
     return text[: max_len - 3].rstrip() + "..."
 
 
+def _compact_num(n: int) -> str:
+    """Format large numbers compactly: 1234 -> 1.2k, 1234567 -> 1.2M."""
+    if n >= 1_000_000:
+        return f"{n / 1_000_000:.1f}M".replace(".0M", "M")
+    if n >= 1_000:
+        return f"{n / 1_000:.1f}k".replace(".0k", "k")
+    return str(n)
+
+
 # -- Blocking helpers (for executor) -----------------------------------------
 
 def _query_stream(login: str) -> dict:
@@ -172,15 +181,19 @@ async def _poll_once(bot, key: str, announce: bool = True) -> None:
     new_stream_id = result["stream_id"]
     data["last_title"] = result["title"]
     data["last_game"] = result["game"]
+    data["last_viewers"] = result["viewers"]
 
     if announce and (not was_live or new_stream_id != old_stream_id):
         channel = data["channel"]
         name = data["name"]
         title = _truncate(result["title"]) if result["title"] else "(no title)"
         game = result["game"]
+        viewers = result["viewers"]
         line = f"[{name}] is live: {title}"
         if game:
             line += f" ({game})"
+        if viewers:
+            line += f" | {_compact_num(viewers)} viewers"
         line += f" -- https://twitch.tv/{login}"
         await bot.send(channel, line)
 
@@ -286,7 +299,13 @@ async def cmd_twitch(bot, message):
         if err:
             streamers.append(f"{name} (error)")
         elif live:
-            streamers.append(f"{name} (live)")
+            viewers = data.get("last_viewers", 0)
+            if viewers:
+                streamers.append(
+                    f"{name} (live, {_compact_num(viewers)})"
+                )
+            else:
+                streamers.append(f"{name} (live)")
         else:
             streamers.append(name)
     if not streamers:
@@ -318,9 +337,12 @@ async def cmd_twitch(bot, message):
     elif data.get("was_live"):
         title = _truncate(data.get("last_title", ""))
         game = data.get("last_game", "")
+        viewers = data.get("last_viewers", 0)
         line = f"{name}: live -- {title}"
         if game:
             line += f" ({game})"
+        if viewers:
+            line += f" | {_compact_num(viewers)} viewers"
         await bot.reply(message, line)
     else:
         await bot.reply(message, f"{name}: offline")
@@ -27,6 +27,7 @@ _YT_PLAYER_URL = "https://www.youtube.com/youtubei/v1/player"
 _YT_CLIENT_VERSION = "2.20250101.00.00"
 _ATOM_NS = "{http://www.w3.org/2005/Atom}"
 _YT_NS = "{http://www.youtube.com/xml/schemas/2015}"
+_MEDIA_NS = "{http://search.yahoo.com/mrss/}"
 _MAX_SEEN = 200
 _MAX_ANNOUNCE = 5
 _DEFAULT_INTERVAL = 600
@@ -74,6 +75,15 @@ def _truncate(text: str, max_len: int = _MAX_TITLE_LEN) -> str:
     return text[: max_len - 3].rstrip() + "..."
 
 
+def _compact_num(n: int) -> str:
+    """Format large numbers compactly: 1234 -> 1.2k, 1234567 -> 1.2M."""
+    if n >= 1_000_000:
+        return f"{n / 1_000_000:.1f}M".replace(".0M", "M")
+    if n >= 1_000:
+        return f"{n / 1_000:.1f}k".replace(".0k", "k")
+    return str(n)
+
+
 def _is_youtube_url(url: str) -> bool:
     """Check if URL is a YouTube domain."""
     try:
@@ -213,8 +223,33 @@ def _parse_feed(body: bytes) -> tuple[str, list[dict]]:
         link = (link_el.get("href", "") if link_el is not None else "").strip()
         if not entry_id:
             entry_id = link
+        # Published date
+        published = (entry.findtext(f"{_ATOM_NS}published") or "").strip()
+        date = published[:10] if len(published) >= 10 else ""
+        # media:statistics views + media:starRating count (likes)
+        views = 0
+        likes = 0
+        group = entry.find(f"{_MEDIA_NS}group")
+        if group is not None:
+            community = group.find(f"{_MEDIA_NS}community")
+            if community is not None:
+                stats_el = community.find(f"{_MEDIA_NS}statistics")
+                if stats_el is not None:
+                    try:
+                        views = int(stats_el.get("views", "0"))
+                    except (ValueError, TypeError):
+                        pass
+                rating_el = community.find(f"{_MEDIA_NS}starRating")
+                if rating_el is not None:
+                    try:
+                        likes = int(rating_el.get("count", "0"))
+                    except (ValueError, TypeError):
+                        pass
         if entry_id:
-            items.append({"id": entry_id, "title": entry_title, "link": link})
+            items.append({
+                "id": entry_id, "title": entry_title, "link": link,
+                "date": date, "views": views, "likes": likes,
+            })
     return (channel_name, items)
 
 
@@ -305,7 +340,21 @@ async def _poll_once(bot, key: str, announce: bool = True) -> None:
         for item in shown:
             title = _truncate(item["title"]) if item["title"] else "(no title)"
             link = item["link"]
+            # Build metadata suffix
+            parts = []
+            views = item.get("views", 0)
+            likes = item.get("likes", 0)
+            if views:
+                parts.append(f"{_compact_num(views)}v")
+            if likes:
+                parts.append(f"{_compact_num(likes)}lk")
+            date = item.get("date", "")
+            if date:
+                parts.append(date)
+            extra = " ".join(parts)
             line = f"[{name}] {title}"
+            if extra:
+                line += f" | {extra}"
             if link:
                 line += f" -- {link}"
             await bot.send(channel, line)
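The feed-stat extraction above can be exercised in isolation. The XML fragment below is a hand-made sample in the shape of YouTube's media RSS extension, and `entry_stats` is a hypothetical wrapper around the same `media:group/media:community` lookups:

```python
import xml.etree.ElementTree as ET

_MEDIA_NS = "{http://search.yahoo.com/mrss/}"


def entry_stats(entry: ET.Element) -> tuple[int, int]:
    """Extract (views, likes) from an entry's media:community block."""
    views = likes = 0
    group = entry.find(f"{_MEDIA_NS}group")
    if group is not None:
        community = group.find(f"{_MEDIA_NS}community")
        if community is not None:
            stats = community.find(f"{_MEDIA_NS}statistics")
            if stats is not None:
                try:
                    views = int(stats.get("views", "0"))
                except (ValueError, TypeError):
                    pass
            rating = community.find(f"{_MEDIA_NS}starRating")
            if rating is not None:
                try:
                    likes = int(rating.get("count", "0"))
                except (ValueError, TypeError):
                    pass
    return views, likes


xml = (
    '<entry xmlns:media="http://search.yahoo.com/mrss/">'
    '<media:group><media:community>'
    '<media:starRating count="45000"/><media:statistics views="1500000"/>'
    '</media:community></media:group></entry>'
)
print(entry_stats(ET.fromstring(xml)))  # (1500000, 45000)
```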
@@ -19,6 +19,7 @@ _spec.loader.exec_module(_mod)
|
||||
|
||||
from plugins.alert import ( # noqa: E402
|
||||
_MAX_SEEN,
|
||||
_compact_num,
|
||||
_delete,
|
||||
_errors,
|
||||
_extract_videos,
|
||||
@@ -27,6 +28,7 @@ from plugins.alert import ( # noqa: E402
|
||||
_pollers,
|
||||
_restore,
|
||||
_save,
|
||||
_save_result,
|
||||
_search_searx,
|
||||
_search_twitch,
|
||||
_search_youtube,
|
||||
@@ -179,6 +181,10 @@ class _FakeBot:
|
||||
async def reply(self, message, text: str) -> None:
|
||||
self.replied.append(text)
|
||||
|
||||
async def long_reply(self, message, lines, *, label: str = "") -> None:
|
||||
for line in lines:
|
||||
self.replied.append(line)
|
||||
|
||||
def _is_admin(self, message) -> bool:
|
||||
return self._admin
|
||||
|
||||
@@ -223,9 +229,9 @@ def _fake_tw(keyword):
|
||||
"""Fake Twitch backend returning two results (keyword in title)."""
|
||||
return [
|
||||
{"id": "stream:tw1", "title": "TW test Stream 1",
|
||||
"url": "https://twitch.tv/user1", "extra": ""},
|
||||
"url": "https://twitch.tv/user1", "extra": "500 viewers"},
|
||||
{"id": "vod:tw2", "title": "TW test VOD 1",
|
||||
"url": "https://twitch.tv/videos/tw2", "extra": ""},
|
||||
"url": "https://twitch.tv/videos/tw2", "extra": "1k views"},
|
||||
]
|
||||
|
||||
|
||||
@@ -322,6 +328,36 @@ class TestTruncate:
|
||||
assert not result.endswith(" ...")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# TestCompactNum
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class TestCompactNum:
|
||||
def test_zero(self):
|
||||
assert _compact_num(0) == "0"
|
||||
|
||||
def test_small(self):
|
||||
assert _compact_num(999) == "999"
|
||||
|
||||
def test_one_k(self):
|
||||
assert _compact_num(1000) == "1k"
|
||||
|
||||
def test_one_point_five_k(self):
|
||||
assert _compact_num(1500) == "1.5k"
|
||||
|
||||
def test_one_m(self):
|
||||
assert _compact_num(1000000) == "1M"
|
||||
|
||||
def test_two_point_five_m(self):
|
||||
assert _compact_num(2500000) == "2.5M"
|
||||
|
||||
def test_exact_boundary(self):
|
||||
assert _compact_num(10000) == "10k"
|
||||
|
||||
def test_large_millions(self):
|
||||
assert _compact_num(12300000) == "12.3M"
|


# ---------------------------------------------------------------------------
# TestExtractVideos
# ---------------------------------------------------------------------------
@@ -873,6 +909,13 @@ class TestPollOnce:
            assert "[poll/yt/" in actions[0]
            assert "[poll/tw/" in actions[2]
            assert "[poll/sx/" in actions[4]
            # Twitch fakes have extra metadata; verify it appears in titles
            tw_titles = [s for t, s in bot.sent if t == "#test" and "TW test" in s]
            assert any("| 500 viewers" in t for t in tw_titles)
            assert any("| 1k views" in t for t in tw_titles)
            # YouTube fakes have no extra; verify no pipe suffix
            yt_titles = [s for t, s in bot.sent if t == "#test" and "YT test" in s]
            assert all("|" not in t for t in yt_titles)

        asyncio.run(inner())

@@ -1239,3 +1282,103 @@ class TestSearchSearx:
        with patch("urllib.request.urlopen", side_effect=ConnectionError("fail")):
            results = _search_searx("test")
            assert results == []


# ---------------------------------------------------------------------------
# TestExtraInHistory
# ---------------------------------------------------------------------------

class TestExtraInHistory:
    def test_history_shows_extra(self):
        """History output includes | extra suffix when extra is non-empty."""
        _clear()
        bot = _FakeBot()
        data = {
            "keyword": "test", "name": "hist", "channel": "#test",
            "interval": 300, "seen": {}, "last_poll": "", "last_error": "",
        }
        _save(bot, "#test:hist", data)
        # Insert a result with extra metadata
        _save_result("#test", "hist", "hn", {
            "id": "h1", "title": "Cool HN Post", "url": "https://hn.example.com/1",
            "date": "2026-01-15", "extra": "42pt 10c",
        })

        async def inner():
            await cmd_alert(bot, _msg("!alert history hist"))
            assert len(bot.replied) >= 1
            found = any("| 42pt 10c" in line for line in bot.replied)
            assert found, f"Expected extra in history, got: {bot.replied}"

        asyncio.run(inner())

    def test_history_no_extra(self):
        """History output has no pipe when extra is empty."""
        _clear()
        bot = _FakeBot()
        data = {
            "keyword": "test", "name": "hist2", "channel": "#test",
            "interval": 300, "seen": {}, "last_poll": "", "last_error": "",
        }
        _save(bot, "#test:hist2", data)
        _save_result("#test", "hist2", "yt", {
            "id": "y1", "title": "Plain Video", "url": "https://yt.example.com/1",
            "date": "", "extra": "",
        })

        async def inner():
            await cmd_alert(bot, _msg("!alert history hist2"))
            assert len(bot.replied) >= 1
            assert all("|" not in line for line in bot.replied)

        asyncio.run(inner())


# ---------------------------------------------------------------------------
# TestExtraInInfo
# ---------------------------------------------------------------------------

class TestExtraInInfo:
    def test_info_shows_extra(self):
        """Info output includes | extra suffix when extra is non-empty."""
        _clear()
        bot = _FakeBot()
        data = {
            "keyword": "test", "name": "inf", "channel": "#test",
            "interval": 300, "seen": {}, "last_poll": "", "last_error": "",
        }
        _save(bot, "#test:inf", data)
        short_id = _save_result("#test", "inf", "gh", {
            "id": "g1", "title": "cool/repo: A cool project",
            "url": "https://github.com/cool/repo",
            "date": "2026-01-10", "extra": "42* 5fk",
        })

        async def inner():
            await cmd_alert(bot, _msg(f"!alert info {short_id}"))
            assert len(bot.replied) >= 1
            assert "| 42* 5fk" in bot.replied[0]

        asyncio.run(inner())

    def test_info_no_extra(self):
        """Info output has no pipe when extra is empty."""
        _clear()
        bot = _FakeBot()
        data = {
            "keyword": "test", "name": "inf2", "channel": "#test",
            "interval": 300, "seen": {}, "last_poll": "", "last_error": "",
        }
        _save(bot, "#test:inf2", data)
        short_id = _save_result("#test", "inf2", "yt", {
            "id": "y2", "title": "Some Video",
            "url": "https://youtube.com/watch?v=y2",
            "date": "", "extra": "",
        })

        async def inner():
            await cmd_alert(bot, _msg(f"!alert info {short_id}"))
            assert len(bot.replied) >= 1
            assert "|" not in bot.replied[0]

        asyncio.run(inner())

@@ -25,6 +25,7 @@ from plugins.rss import ( # noqa: E402
    _feeds,
    _load,
    _parse_atom,
    _parse_date,
    _parse_feed,
    _parse_rss,
    _poll_once,
@@ -52,16 +53,19 @@ RSS_FEED = b"""\
    <guid>item-1</guid>
    <title>First Post</title>
    <link>https://example.com/1</link>
    <pubDate>Mon, 10 Feb 2026 09:00:00 +0000</pubDate>
  </item>
  <item>
    <guid>item-2</guid>
    <title>Second Post</title>
    <link>https://example.com/2</link>
    <pubDate>Tue, 11 Feb 2026 14:30:00 +0000</pubDate>
  </item>
  <item>
    <guid>item-3</guid>
    <title>Third Post</title>
    <link>https://example.com/3</link>
    <pubDate>Wed, 12 Feb 2026 08:00:00 +0000</pubDate>
  </item>
</channel>
</rss>
@@ -88,11 +92,13 @@ ATOM_FEED = b"""\
    <id>atom-1</id>
    <title>Atom First</title>
    <link href="https://example.com/a1"/>
    <published>2026-02-15T10:00:00Z</published>
  </entry>
  <entry>
    <id>atom-2</id>
    <title>Atom Second</title>
    <link href="https://example.com/a2"/>
    <published>2026-02-16T15:30:00Z</published>
  </entry>
</feed>
"""
@@ -333,6 +339,20 @@ class TestParseRSS:
        assert items[0]["title"] == "First Post"
        assert items[0]["link"] == "https://example.com/1"

    def test_parses_pubdate(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(RSS_FEED)
        _, items = _parse_rss(root)
        assert items[0]["date"] == "2026-02-10"
        assert items[1]["date"] == "2026-02-11"
        assert items[2]["date"] == "2026-02-12"

    def test_no_pubdate_empty_string(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(RSS_NO_GUID)
        _, items = _parse_rss(root)
        assert items[0]["date"] == ""

    def test_fallback_to_link_as_id(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(RSS_NO_GUID)
@@ -364,6 +384,19 @@ class TestParseAtom:
        assert items[0]["title"] == "Atom First"
        assert items[0]["link"] == "https://example.com/a1"

    def test_parses_published_date(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(ATOM_FEED)
        _, items = _parse_atom(root)
        assert items[0]["date"] == "2026-02-15"
        assert items[1]["date"] == "2026-02-16"

    def test_no_published_empty_string(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(ATOM_NO_ID)
        _, items = _parse_atom(root)
        assert items[0]["date"] == ""

    def test_fallback_to_link_as_id(self):
        import xml.etree.ElementTree as ET
        root = ET.fromstring(ATOM_NO_ID)
@@ -755,6 +788,9 @@ class TestCmdRssCheck:
            assert len(announcements) == 2
            assert "[news]" in announcements[0]
            assert "Second Post" in announcements[0]
            # Verify date suffix
            assert "| 2026-02-11" in announcements[0]
            assert "| 2026-02-12" in announcements[1]

        asyncio.run(inner())

@@ -1073,3 +1109,27 @@ class TestCmdRssUsage:
        bot = _FakeBot()
        asyncio.run(cmd_rss(bot, _msg("!rss foobar")))
        assert "Usage:" in bot.replied[0]


# ---------------------------------------------------------------------------
# TestParseDate
# ---------------------------------------------------------------------------

class TestParseDate:
    def test_iso_format(self):
        assert _parse_date("2026-02-15T10:00:00Z") == "2026-02-15"

    def test_iso_with_offset(self):
        assert _parse_date("2026-02-15T10:00:00+00:00") == "2026-02-15"

    def test_rfc2822_format(self):
        assert _parse_date("Mon, 10 Feb 2026 09:00:00 +0000") == "2026-02-10"

    def test_empty_string(self):
        assert _parse_date("") == ""

    def test_garbage(self):
        assert _parse_date("not a date") == ""

    def test_date_only(self):
        assert _parse_date("2026-01-01") == "2026-01-01"

@@ -17,6 +17,7 @@ sys.modules[_spec.name] = _mod
_spec.loader.exec_module(_mod)

from plugins.twitch import ( # noqa: E402
    _compact_num,
    _delete,
    _errors,
    _load,
@@ -652,6 +653,17 @@ class TestCmdTwitchList:
        assert "broken (error)" in bot.replied[0]

    def test_list_shows_live(self):
        _clear()
        bot = _FakeBot()
        _save(bot, "#test:xqc", {
            "name": "xqc", "channel": "#test",
            "last_error": "", "was_live": True,
            "last_viewers": 50000,
        })
        asyncio.run(cmd_twitch(bot, _msg("!twitch list")))
        assert "xqc (live, 50k)" in bot.replied[0]

    def test_list_shows_live_no_viewers(self):
        _clear()
        bot = _FakeBot()
        _save(bot, "#test:xqc", {
@@ -725,8 +737,10 @@ class TestCmdTwitchCheck:
            assert len(announcements) == 1
            assert "[xqc] is live" in announcements[0]
            assert "Fortnite" in announcements[0]
-           # Check reply shows live status
+           assert "| 50k viewers" in announcements[0]
+           # Check reply shows live status with viewers
            assert "xqc: live" in bot.replied[0]
+           assert "| 50k viewers" in bot.replied[0]

        asyncio.run(inner())

@@ -794,6 +808,7 @@ class TestPollOnce:
            assert "[xqc] is live" in messages[0]
            assert "Playing games" in messages[0]
            assert "Fortnite" in messages[0]
            assert "| 50k viewers" in messages[0]
            assert "https://twitch.tv/xqc" in messages[0]
            updated = _load(bot, key)
            assert updated["was_live"] is True
@@ -941,6 +956,7 @@ class TestPollOnce:
            assert len(messages) == 1
            assert "Just chatting" in messages[0]
            assert "(" not in messages[0] # No game parenthetical
            assert "| 100 viewers" in messages[0]

        asyncio.run(inner())

@@ -1132,3 +1148,33 @@ class TestCmdTwitchUsage:
        bot = _FakeBot()
        asyncio.run(cmd_twitch(bot, _msg("!twitch foobar")))
        assert "Usage:" in bot.replied[0]


# ---------------------------------------------------------------------------
# TestCompactNum
# ---------------------------------------------------------------------------

class TestCompactNum:
    def test_zero(self):
        assert _compact_num(0) == "0"

    def test_small(self):
        assert _compact_num(999) == "999"

    def test_one_k(self):
        assert _compact_num(1000) == "1k"

    def test_fractional_k(self):
        assert _compact_num(1500) == "1.5k"

    def test_one_m(self):
        assert _compact_num(1_000_000) == "1M"

    def test_fractional_m(self):
        assert _compact_num(2_500_000) == "2.5M"

    def test_fifty_k(self):
        assert _compact_num(50000) == "50k"

    def test_hundred(self):
        assert _compact_num(100) == "100"

@@ -19,6 +19,7 @@ _spec.loader.exec_module(_mod)
from plugins.youtube import ( # noqa: E402
    _MAX_ANNOUNCE,
    _channels,
    _compact_num,
    _delete,
    _derive_name,
    _errors,
@@ -44,26 +45,48 @@ from plugins.youtube import ( # noqa: E402
YT_ATOM_FEED = b"""\
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns:yt="http://www.youtube.com/xml/schemas/2015"
-     xmlns="http://www.w3.org/2005/Atom">
+     xmlns="http://www.w3.org/2005/Atom"
+     xmlns:media="http://search.yahoo.com/mrss/">
  <title>3Blue1Brown - Videos</title>
  <author><name>3Blue1Brown</name></author>
  <entry>
    <id>yt:video:abc123</id>
    <yt:videoId>abc123</yt:videoId>
    <title>Linear Algebra</title>
    <published>2026-01-15T12:00:00+00:00</published>
    <link rel="alternate" href="https://www.youtube.com/watch?v=abc123"/>
    <media:group>
      <media:community>
        <media:statistics views="1500000"/>
        <media:starRating count="45000"/>
      </media:community>
    </media:group>
  </entry>
  <entry>
    <id>yt:video:def456</id>
    <yt:videoId>def456</yt:videoId>
    <title>Calculus</title>
    <published>2026-02-01T08:30:00+00:00</published>
    <link rel="alternate" href="https://www.youtube.com/watch?v=def456"/>
    <media:group>
      <media:community>
        <media:statistics views="820000"/>
        <media:starRating count="32000"/>
      </media:community>
    </media:group>
  </entry>
  <entry>
    <id>yt:video:ghi789</id>
    <yt:videoId>ghi789</yt:videoId>
    <title>Neural Networks</title>
    <published>2026-02-10T14:00:00+00:00</published>
    <link rel="alternate" href="https://www.youtube.com/watch?v=ghi789"/>
    <media:group>
      <media:community>
        <media:statistics views="250000"/>
        <media:starRating count="12000"/>
      </media:community>
    </media:group>
  </entry>
</feed>
"""
@@ -362,6 +385,28 @@ class TestParseFeed:
        channel_name, _ = _parse_feed(YT_ATOM_FEED)
        assert channel_name == "3Blue1Brown"

    def test_parses_published_date(self):
        _, items = _parse_feed(YT_ATOM_FEED)
        assert items[0]["date"] == "2026-01-15"
        assert items[1]["date"] == "2026-02-01"
        assert items[2]["date"] == "2026-02-10"

    def test_parses_views(self):
        _, items = _parse_feed(YT_ATOM_FEED)
        assert items[0]["views"] == 1500000
        assert items[1]["views"] == 820000

    def test_parses_likes(self):
        _, items = _parse_feed(YT_ATOM_FEED)
        assert items[0]["likes"] == 45000
        assert items[1]["likes"] == 32000

    def test_no_media_defaults_zero(self):
        _, items = _parse_feed(YT_ATOM_NO_VIDEOID)
        assert items[0]["views"] == 0
        assert items[0]["likes"] == 0
        assert items[0]["date"] == ""


# ---------------------------------------------------------------------------
# TestResolveChannel
# ---------------------------------------------------------------------------
@@ -789,6 +834,11 @@ class TestCmdYtCheck:
            assert len(announcements) == 2
            assert "[news]" in announcements[0]
            assert "Calculus" in announcements[0]
            # Verify metadata suffix (views, likes, date)
            assert "| " in announcements[0]
            assert "820kv" in announcements[0]
            assert "32klk" in announcements[0]
            assert "2026-02-01" in announcements[0]

        asyncio.run(inner())

@@ -1103,3 +1153,27 @@ class TestCmdYtUsage:
        bot = _FakeBot()
        asyncio.run(cmd_yt(bot, _msg("!yt foobar")))
        assert "Usage:" in bot.replied[0]


# ---------------------------------------------------------------------------
# TestCompactNum
# ---------------------------------------------------------------------------

class TestCompactNum:
    def test_zero(self):
        assert _compact_num(0) == "0"

    def test_small(self):
        assert _compact_num(999) == "999"

    def test_one_k(self):
        assert _compact_num(1000) == "1k"

    def test_fractional_k(self):
        assert _compact_num(1500) == "1.5k"

    def test_one_m(self):
        assert _compact_num(1_000_000) == "1M"

    def test_fractional_m(self):
        assert _compact_num(2_500_000) == "2.5M"