feat: listener retry override, pool protocol filter, conn pool docs

- Per-listener `retries` overrides global default (0 = inherit)
- Pool-level `allowed_protos` filters proxies during merge
- Connection pooling documented in CHEATSHEET.md
- Both features exposed in /config and /status API responses
- 12 new tests (config parsing, API exposure, merge filtering)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: user
2026-02-21 20:35:14 +01:00
parent c1c92ddc39
commit 3593481b30
13 changed files with 674 additions and 120 deletions


@@ -132,6 +132,37 @@ curl -x socks5h://alice:s3cret@127.0.0.1:1080 https://example.com
No `auth:` key = no authentication required (default).
## Listener Retry Override (config)
```yaml
listeners:
  - listen: 0.0.0.0:1080
    retries: 5                     # override global retries
    chain:
      - socks5://127.0.0.1:9050
      - pool
  - listen: 0.0.0.0:1082           # no retries key (or 0) = use global default
    chain:
      - socks5://127.0.0.1:9050
```
Per-listener `retries` overrides the global `retries` setting. Set to 0 (or
omit) to inherit the global value.
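The inheritance rule can be sketched in Python. This is an illustration of the documented behaviour only, not s5p's actual code; the function name is hypothetical:

```python
def effective_retries(listener_retries: int, global_retries: int) -> int:
    """Per-listener value wins unless it is 0, which means inherit the global."""
    return listener_retries if listener_retries > 0 else global_retries

# Listener with `retries: 5` overrides a global default of 3.
print(effective_retries(5, 3))  # 5
# Listener with `retries` omitted (0) inherits the global value.
print(effective_retries(0, 3))  # 3
```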
## Pool Protocol Filter (config)
```yaml
proxy_pools:
  socks_only:
    allowed_protos: [socks5]       # reject http proxies
    sources:
      - url: http://api:8081/proxies/all
```
When set, proxies not matching `allowed_protos` are silently dropped during
merge. Useful when a source returns mixed protocols but the pool should
only serve a specific type.
## Multi-Tor Round-Robin (config)
```yaml
@@ -150,6 +181,23 @@ pool_size: 8 # pre-warmed TCP conns to first hop (0 = off)
pool_max_idle: 30 # evict idle pooled conns (seconds)
```
## Connection Pool (config)
```yaml
pool_size: 8 # pre-warmed TCP connections per first hop (0 = off)
pool_max_idle: 30 # evict idle connections after N seconds
```
Pre-warms TCP connections to the first hop in the chain. Only the raw TCP
connection is pooled; the SOCKS/HTTP handshake still runs per request and
consumes the pooled connection. One pool is created per unique first hop
(shared across listeners). Requires at least one hop in `chain`.
| Setting | Default | Notes |
|---------|---------|-------|
| `pool_size` | 0 (off) | Connections per first hop |
| `pool_max_idle` | 30 | Idle eviction in seconds |
## Named Proxy Pools (config)
```yaml
@@ -229,7 +277,7 @@ http://user:pass@host:port
s5p --api 127.0.0.1:1081 -c config/s5p.yaml # enable API
curl -s http://127.0.0.1:1081/status | jq . # runtime status
-curl -s http://127.0.0.1:1081/metrics | jq . # full metrics
+curl -s http://127.0.0.1:1081/metrics # prometheus metrics
curl -s http://127.0.0.1:1081/pool | jq . # all proxies
curl -s http://127.0.0.1:1081/pool/alive | jq . # alive only
curl -s http://127.0.0.1:1081/config | jq . # current config
@@ -267,27 +315,26 @@ python -m pstats ~/.cache/s5p/s5p.prof # container profile output
metrics: conn=1842 ok=1790 fail=52 retries=67 active=3 in=50.0M out=1.0G rate=4.72/s p50=198.3ms p95=890.1ms up=1h01m01s pool=42/65
```
-## Metrics JSON (`/metrics`)
+## Prometheus Metrics (`/metrics`)
```bash
-curl -s http://127.0.0.1:1081/metrics | jq .
+curl -s http://127.0.0.1:1081/metrics
```
-```json
-{
-  "connections": 1842,
-  "success": 1790,
-  "rate": 4.72,
-  "latency": {"count": 1000, "min": 45.2, "max": 2841.7, "avg": 312.4, "p50": 198.3, "p95": 890.1, "p99": 1523.6},
-  "listener_latency": {
-    "0.0.0.0:1080": {"count": 500, "p50": 1800.2, "p95": 8200.1, "...": "..."},
-    "0.0.0.0:1081": {"count": 300, "p50": 1000.1, "p95": 3500.2, "...": "..."},
-    "0.0.0.0:1082": {"count": 200, "p50": 400.1, "p95": 1200.5, "...": "..."}
-  }
-}
-```
+```
+# TYPE s5p_connections counter
+s5p_connections_total 1842
+# TYPE s5p_active_connections gauge
+s5p_active_connections 3
+# TYPE s5p_pool_proxies_alive gauge
+s5p_pool_proxies_alive{pool="clean"} 30
+# TYPE s5p_chain_latency_seconds summary
+s5p_chain_latency_seconds{quantile="0.5"} 0.198300
+s5p_chain_latency_seconds{quantile="0.95"} 0.890100
+# EOF
+```
-Per-listener latency also appears in `/status` under each listener entry.
+OpenMetrics format. Use `/status` for the JSON equivalent.
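The exposition above can also be consumed without a Prometheus server. A hedged Python sketch of a sample parser (it assumes only the simple `name{labels} value` line shape shown above and skips `#` comment lines; not part of s5p):

```python
def parse_samples(text: str) -> dict[str, float]:
    """Map 'name' or 'name{labels}' -> value, skipping comments and blanks."""
    out: dict[str, float] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        out[name] = float(value)
    return out

sample = """\
# TYPE s5p_connections counter
s5p_connections_total 1842
s5p_pool_proxies_alive{pool="clean"} 30
# EOF
"""
m = parse_samples(sample)
print(m["s5p_connections_total"])  # 1842.0
```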
## Troubleshooting