Dockerfile.test builds production image with pytest baked in.
compose.test.yml mounts source as volume for fast iteration.
Usage: podman-compose -f compose.test.yml run --rm test
- threading.local() caches proxy_db and url_db per greenlet (eliminates
~2.7k redundant sqlite3.connect + PRAGMA calls per session on odin)
- ASN database now lazy-loaded on first lookup (defers ~3.6s startup cost)
- URL claim error penalty increased from 0.3*error (cap 2) to 0.5*error (cap 4)
and the stale penalty from 0.1*stale (cap 1) to 0.2*stale (cap 1.5), reducing
worker cycles wasted on erroring URLs (71% of the 7,158 URLs were erroring)
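The per-greenlet connection cache can be sketched like this (function name, DB path, and PRAGMA are illustrative; under gevent's monkey-patching, threading.local() storage is per-greenlet, so each greenlet pays the connect + PRAGMA cost once):

```python
import sqlite3
import threading

# Per-thread (per-greenlet under gevent) storage for cached DB handles.
_local = threading.local()

def get_db(path="proxies.db"):
    """Return a cached sqlite3 connection, opening it on first use."""
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = sqlite3.connect(path)
        # PRAGMAs now run once per greenlet instead of once per query.
        conn.execute("PRAGMA journal_mode=WAL")
        _local.conn = conn
    return conn
```

Repeated calls from the same greenlet return the same handle, which is what eliminates the ~2.7k redundant connect + PRAGMA calls per session.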
Boost SOCKS sources in claim_urls scoring when SOCKS proxies
are underrepresented (<40% of pool). Dynamic 0-1.0 boost based
on current protocol distribution.
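A minimal sketch of the dynamic boost, assuming a linear scale from the 40% target share down to an empty SOCKS pool (function name and the linear shape are assumptions):

```python
def socks_boost(socks_count, total, target=0.40, max_boost=1.0):
    """Scoring boost for SOCKS sources, scaled 0..max_boost by how far
    the SOCKS share falls below the target fraction of the pool."""
    if total == 0:
        return max_boost  # empty pool: boost SOCKS sources fully
    share = socks_count / total
    if share >= target:
        return 0.0  # SOCKS adequately represented, no boost
    return max_boost * (target - share) / target
```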
Track connection state with a _connected flag and call
socket.shutdown() only on sockets that actually connected.
Saves ~39s/session on workers (974k disconnect calls).
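A sketch of the flag, assuming a connection wrapper class (the class and method names are hypothetical):

```python
import socket

class ProxyConn:
    """Track whether connect() succeeded so disconnect() can skip
    socket.shutdown() on sockets that never connected."""

    def __init__(self):
        self._sock = socket.socket()
        self._connected = False

    def connect(self, addr, timeout=5.0):
        self._sock.settimeout(timeout)
        self._sock.connect(addr)
        self._connected = True  # set only after a successful connect

    def disconnect(self):
        if self._connected:
            try:
                self._sock.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass  # peer may already be gone
        self._sock.close()
        self._connected = False
```

Unconnected sockets skip shutdown() entirely, avoiding the syscall (and its error path) that dominated the 974k disconnect calls.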
Skip expensive regex scans when content lacks required markers:
- extract_auth_proxies: skip if no '@' in content
- extract_proxies_from_table: skip if no '<table' tag
- extract_proxies_from_json: skip if no '{' or '['
- Hoist table regexes to module-level precompiled constants
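The pre-check pattern, sketched for the auth-proxy extractor (the regex and helper name here are illustrative, not the actual patterns from the codebase):

```python
import re

# Hoisted to module level: compiled once at import, not per call.
AUTH_PROXY_RE = re.compile(r"\S+:\S+@\d{1,3}(?:\.\d{1,3}){3}:\d+")

def extract_auth_proxies(content):
    # Cheap substring test before the expensive regex scan:
    # auth proxies always contain '@', so its absence is a fast reject.
    if "@" not in content:
        return []
    return AUTH_PROXY_RE.findall(content)
```

The other extractors follow the same shape: a constant-time `in` check on a required marker (`'<table'`, `'{'`/`'['`) guards the full-content scan.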
Add urls section with total/healthy/dead/erroring counts, fetch
activity, productive source count, aggregate yield, and top sources
ranked by working_ratio.
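The shape of the new section might look roughly like this (field names beyond those listed above, and the builder function itself, are assumptions):

```python
def build_urls_stats(total, healthy, dead, erroring, sources):
    """Assemble the hypothetical 'urls' stats section: counts plus
    top sources ranked by working_ratio."""
    return {
        "urls": {
            "total": total,
            "healthy": healthy,
            "dead": dead,
            "erroring": erroring,
            # A source is "productive" if any of its proxies worked.
            "productive_sources": sum(
                1 for s in sources if s["working_ratio"] > 0
            ),
            "top_sources": sorted(
                sources, key=lambda s: s["working_ratio"], reverse=True
            ),
        }
    }
```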
Load pyasn database in httpd and look up ASN when workers report
working proxies. Falls back to a pure-Python ipasn.dat reader when
the pyasn C extension is unavailable (Python 2.7 containers).
Backfills ASNs for existing proxies that have a null ASN on startup.
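The pure-Python fallback boils down to a longest-prefix match over an ipasn.dat-style prefix table. A sketch of that idea (class name is hypothetical; this uses Python 3's ipaddress module for brevity, so the actual Python 2.7 fallback must parse prefixes differently):

```python
import ipaddress

class SimpleAsnDb:
    """Toy stand-in for pyasn: longest-prefix match over
    (network, asn) pairs parsed from ipasn.dat-style lines."""

    def __init__(self, lines):
        self._nets = []
        for line in lines:
            if line.startswith(";") or not line.strip():
                continue  # skip comments and blank lines
            prefix, asn = line.split()
            self._nets.append((ipaddress.ip_network(prefix), int(asn)))
        # Longest prefix first so the first containing network wins.
        self._nets.sort(key=lambda e: e[0].prefixlen, reverse=True)

    def lookup(self, ip):
        addr = ipaddress.ip_address(ip)
        for net, asn in self._nets:
            if addr in net:
                return asn
        return None
```

The real pyasn C extension does this with a radix tree; a linear scan like the above is only viable because lookups happen once per reported working proxy.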
Seed sources that error out are permanently excluded from claiming.
Over time this starves the pipeline. Re-seed every 6 hours with
error reset for exhausted sources, preventing the starvation loop
that caused the previous outage.
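The re-seed loop can be sketched as follows (field names and the reset policy details are assumptions; the 6-hour interval is from the change above):

```python
RESEED_INTERVAL = 6 * 3600  # re-seed exhausted sources every 6 hours

def maybe_reseed(sources, now, last_reseed):
    """If the interval has elapsed, reset error counters on exhausted
    seed sources so they become claimable again, preventing the
    starvation loop."""
    if now - last_reseed < RESEED_INTERVAL:
        return last_reseed  # not due yet
    for src in sources:
        if src.get("exhausted"):
            src["errors"] = 0
            src["exhausted"] = False
    return now  # new last_reseed timestamp
```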
Serving endpoints filter by last_seen >= now - 3600, but watchd
never set last_seen -- only worker reports did. This caused the
API to return 0 proxies despite 70+ proxies passing verification.
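The fix amounts to refreshing last_seen in watchd's verification path. A sketch, assuming a `proxies` table with an `id` and a Unix-timestamp `last_seen` column (function name is hypothetical):

```python
import sqlite3
import time

def record_verification(conn, proxy_id):
    """Refresh last_seen on successful verification so the proxy
    survives the serving filter (last_seen >= now - 3600)."""
    conn.execute(
        "UPDATE proxies SET last_seen = ? WHERE id = ?",
        (int(time.time()), proxy_id),
    )
    conn.commit()
```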
Generalizes JudgeStats into TargetStats with cooldown-based filtering
for head targets, SSL targets, and IRC servers. Targets that repeatedly
block or fail are temporarily avoided, preventing unfair proxy failures
when a target goes down. Exposes per-pool health via /api/stats.
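A minimal sketch of the cooldown mechanic (threshold, cooldown length, and method names are assumptions; one such tracker would exist per pool of head/SSL/IRC targets):

```python
import time

class TargetStats:
    """Per-target failure tracking with a cooldown: targets that fail
    repeatedly are skipped for a while instead of counting against
    the proxies tested through them."""

    def __init__(self, fail_threshold=3, cooldown=300):
        self.fail_threshold = fail_threshold
        self.cooldown = cooldown
        self.failures = {}
        self.blocked_until = {}

    def report(self, target, ok, now=None):
        now = time.time() if now is None else now
        if ok:
            self.failures[target] = 0  # success resets the streak
            return
        self.failures[target] = self.failures.get(target, 0) + 1
        if self.failures[target] >= self.fail_threshold:
            # Too many consecutive failures: put target on cooldown.
            self.blocked_until[target] = now + self.cooldown
            self.failures[target] = 0

    def available(self, target, now=None):
        now = time.time() if now is None else now
        return now >= self.blocked_until.get(target, 0)
```

Exposing `failures` and `blocked_until` per pool is one plausible shape for the /api/stats health view.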
Use SSL error reason from primary handshake to decide whether
the secondary check should use SSL or plain HTTP. Protocol errors
(proxy can't TLS) fall back to plain HTTP; other failures retry
with SSL sans cert verification.
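The decision can be sketched as a mapping from the primary handshake's SSL error reason to a secondary-check mode. WRONG_VERSION_NUMBER and UNKNOWN_PROTOCOL are real OpenSSL reason codes, but the exact set consulted here is an assumption:

```python
def secondary_check_mode(ssl_reason):
    """Pick the secondary check mode from the primary SSL failure:
    protocol-level errors mean the proxy can't speak TLS at all."""
    protocol_errors = {"WRONG_VERSION_NUMBER", "UNKNOWN_PROTOCOL"}
    if ssl_reason in protocol_errors:
        return "http"          # proxy can't TLS: fall back to plain HTTP
    return "ssl-noverify"      # other failures: retry TLS, skip cert checks
```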
Add --become to ansible_cmd (needed when connecting as
ansible user). Add cd /tmp to podman_cmd so sudo -u podman
doesn't fail on an inaccessible /home/ansible cwd.
Parallel execution across hosts, handler-based restart on change,
role-aware paths via group_vars. Connects over WireGuard with
dedicated inventory and SSH key.