ppf/TODO.md (2026-02-17 21:06:35 +01:00)

PPF TODO

Optimization

[ ] JSON Stats Response Caching

  • Cache serialized JSON response with short TTL (1-2s)
  • Only regenerate when underlying stats change
  • Use ETag/If-None-Match for client-side caching
  • Savings: ~7-9s/hour. Low priority, only matters with frequent dashboard access.
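The TTL + ETag idea above could be sketched roughly as follows. This is a hypothetical illustration, not the actual httpd.py code: StatsCache, collect_stats, and the respond() shape are all made-up names standing in for whatever builds and serves the stats JSON.

```python
import hashlib
import json
import time

class StatsCache:
    """Serve a cached serialized stats JSON with a short TTL and an ETag.

    Hypothetical sketch: collect_stats is a callable standing in for
    whatever assembles the stats dict in httpd.py.
    """

    def __init__(self, ttl=2.0):
        self.ttl = ttl
        self.body = None      # cached serialized JSON bytes
        self.etag = None      # hash of the cached body
        self.expires = 0.0    # monotonic deadline for regeneration

    def get(self, collect_stats):
        now = time.monotonic()
        if self.body is None or now >= self.expires:
            self.body = json.dumps(collect_stats()).encode()
            self.etag = hashlib.sha1(self.body).hexdigest()
            self.expires = now + self.ttl
        return self.body, self.etag

    def respond(self, collect_stats, if_none_match=None):
        """Return (status, body, etag); 304 with empty body on ETag match."""
        body, etag = self.get(collect_stats)
        if if_none_match == etag:
            return 304, b"", etag
        return 200, body, etag
```

Within the TTL the serializer never runs, and a client sending If-None-Match gets a body-less 304, which is where the per-request savings come from.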

[ ] Object Pooling for Test States

  • Pool ProxyTestState and TargetTestJob, reset and reuse
  • Savings: ~11-15s/hour. Not recommended - high effort, medium risk, modest gain.
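For reference, the pool would look something like the sketch below. Names are hypothetical; the real ProxyTestState / TargetTestJob classes would need a reset() that clears every per-job field, and the "medium risk" noted above is exactly an incomplete reset leaking stale state between jobs.

```python
import threading

class StatePool:
    """Minimal acquire/release pool for test-state objects (hypothetical sketch).

    Assumes pooled objects expose reset(); the real classes in ppf.py
    may differ.
    """

    def __init__(self, factory, maxsize=1024):
        self._factory = factory
        self._maxsize = maxsize
        self._free = []
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            if self._free:
                return self._free.pop()
        return self._factory()  # pool empty: allocate fresh

    def release(self, obj):
        obj.reset()  # must clear ALL per-job fields, or stale data leaks
        with self._lock:
            if len(self._free) < self._maxsize:
                self._free.append(obj)
```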

[ ] SQLite Connection Reuse

  • Persistent connection per thread with health checks
  • Savings: ~0.3s/hour. Not recommended - negligible benefit.
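If this were ever revisited, a per-thread connection with a health check is a few lines via threading.local. A sketch under assumptions (the database path and function name are illustrative, not the real ppf code):

```python
import sqlite3
import threading

_local = threading.local()

def get_conn(path):
    """Return a persistent per-thread SQLite connection (hypothetical sketch).

    A cheap SELECT 1 health check replaces the connection if it has
    gone bad; each thread gets its own, since SQLite connections are
    not meant to be shared across threads by default.
    """
    conn = getattr(_local, "conn", None)
    if conn is not None:
        try:
            conn.execute("SELECT 1")
            return conn
        except sqlite3.Error:
            conn.close()
    conn = sqlite3.connect(path)
    _local.conn = conn
    return conn
```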

Dashboard

[ ] Performance

  • Cache expensive DB queries (top countries, protocol breakdown)
  • Lazy-load historical data (only when scrolled into view)
  • WebSocket option for push updates (reduce polling overhead)
  • Configurable refresh interval via URL param or localStorage
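The first bullet (caching expensive DB queries) could be a small TTL decorator on the query functions. A sketch with assumed names; top_countries() here is a stand-in for the real aggregate query, not actual project code:

```python
import functools
import time

def ttl_cache(seconds):
    """Cache a zero-argument query function's result for `seconds`.

    Hypothetical sketch for dashboard queries like top countries or
    protocol breakdown, which change slowly relative to page refreshes.
    """
    def decorator(fn):
        state = {"value": None, "expires": 0.0}

        @functools.wraps(fn)
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                state["value"] = fn()      # run the expensive query
                state["expires"] = now + seconds
            return state["value"]
        return wrapper
    return decorator

@ttl_cache(30)
def top_countries():
    # stand-in for the real GROUP BY country query against the stats DB
    return [("US", 120), ("DE", 80)]
```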

[ ] Features

  • Historical graphs (24h, 7d) using stats_history table
  • Per-ASN performance analysis
  • Alert thresholds (success rate < X%, MITM detected)
  • Mobile-responsive improvements
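The alert-threshold feature might reduce to a pure function over the stats payload. A hedged sketch: the field names (success_rate, mitm_detected) are assumptions about the stats dict, not the actual schema.

```python
def check_alerts(stats, min_success_rate=0.90):
    """Return a list of alert strings for the given stats snapshot.

    Hypothetical sketch; `stats` is assumed to resemble the dict the
    stats endpoint serves, and the field names are illustrative.
    """
    alerts = []
    rate = stats.get("success_rate")
    if rate is not None and rate < min_success_rate:
        alerts.append(f"success rate {rate:.1%} below threshold {min_success_rate:.0%}")
    if stats.get("mitm_detected"):
        alerts.append("MITM detected on at least one proxy")
    return alerts
```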

Memory

  • Lock consolidation - reduce per-proxy locks (260k LockType objects)
  • Leaner state objects - reduce dict/list count per job

Memory scales linearly with queue (~4.5 KB/job). No leaks detected. Optimize only if memory becomes a constraint.
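One common shape for the lock consolidation is lock striping: replace one Lock per proxy with a fixed pool that proxies hash into. A sketch under assumptions (class and key format are made up); the trade-off is that proxies sharing a stripe serialize against each other, which is fine while contention is low.

```python
import threading

class LockStripes:
    """Fixed pool of locks shared across all proxies (hypothetical sketch).

    Replaces ~260k per-proxy Lock objects with N stripes, trading a
    small chance of false contention for a large drop in object count.
    """

    def __init__(self, n=256):
        self._locks = [threading.Lock() for _ in range(n)]

    def for_key(self, proxy_key):
        # Same key always maps to the same lock within a process.
        return self._locks[hash(proxy_key) % len(self._locks)]

stripes = LockStripes()
with stripes.for_key("1.2.3.4:8080"):
    pass  # update that proxy's state under its shared stripe lock
```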


Deprecation

[ ] Remove V1 worker protocol

  • V2 workers (URL-driven) are the standard; no V1 workers remain active
  • Remove --worker flag and V1 code path in ppf.py
  • Remove /api/claim, /api/submit V1 endpoints in httpd.py
  • Remove V1 heartbeat/registration handling
  • Clean up any V1-specific state tracking in proxywatchd.py
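While removing the endpoints, it may be worth answering the retired V1 paths with an explicit 410 Gone for a release or two, so any straggler worker fails loudly instead of silently. The paths come from the list above; the handler shape below is an assumed sketch, not the real httpd.py routing.

```python
# Transitional guard for retired V1 endpoints (hypothetical sketch).
V1_ENDPOINTS = {"/api/claim", "/api/submit"}

def route_v1_guard(path):
    """Return (status, message) for a retired V1 path, else None."""
    if path in V1_ENDPOINTS:
        return 410, "V1 worker protocol removed; upgrade to the V2 (URL-driven) worker"
    return None
```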

Known Issues

[!] Podman Container Metadata Disappears

podman ps -a reports no containers even though the container process is still running. The service functions correctly despite the missing metadata. Monitor via ss -tlnp, ps aux, or curl localhost:8081/health. Low impact.


Container Debugging Checklist

1. Check for orphans: ps aux | grep -E "[p]rocess_name"
2. Check port conflicts: ss -tlnp | grep PORT
3. Run foreground: podman run --rm (no -d) to see output
4. Check podman state: podman ps -a
5. Clean stale: pkill -9 -f "pattern" && podman rm -f -a
6. Verify deps: config files, data dirs, volumes exist
7. Check logs: podman logs container_name 2>&1 | tail -50
8. Health check: curl -sf http://localhost:PORT/health