chore: rename quicproquo → quicprochat in docs, Docker, CI, and packaging
Rename all project references from quicproquo/qpq to quicprochat/qpc across documentation, Docker configuration, CI workflows, packaging scripts, operational configs, and build tooling.

- Docker: crate paths, binary names, user/group, data dirs, env vars
- CI: workflow crate references, binary names, artifact names
- Docs: all markdown files under docs/, SDK READMEs, book.toml
- Packaging: OpenWrt Makefile, init script, UCI config (file renames)
- Scripts: justfile, dev-shell, screenshot, cross-compile, ai_team
- Operations: Prometheus config, alert rules, Grafana dashboard
- Config: .env.example (QPQ_* → QPC_*), CODEOWNERS paths
- Top-level: README, CONTRIBUTING, ROADMAP, CLAUDE.md
@@ -1,10 +1,10 @@
# Scaling Guide

-This document covers resource sizing, scaling triggers, and capacity planning for quicproquo deployments.
+This document covers resource sizing, scaling triggers, and capacity planning for quicprochat deployments.

## Architecture Overview

-quicproquo runs as a single-process server handling QUIC connections. Key resource consumers:
+quicprochat runs as a single-process server handling QUIC connections. Key resource consumers:

- **CPU**: TLS 1.3 handshakes (QUIC), OPAQUE PAKE authentication, message routing
- **Memory**: In-memory session state (DashMap), QUIC connection state, delivery waiters, rate limit entries
@@ -70,7 +70,7 @@ The server is async (Tokio) and benefits from multiple cores. QUIC TLS handshake

```bash
# Check current CPU usage
-top -bn1 -p $(pgrep qpq-server)
+top -bn1 -p $(pgrep qpc-server)

# For Docker: increase CPU limits
# docker-compose.prod.yml:
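# (Illustrative sketch only, not the shipped file: with Compose v2, a CPU
# cap for the server service could be expressed like this.)
#   services:
#     qpc-server:
#       deploy:
#         resources:
#           limits:
#             cpus: "4.0"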
@@ -107,22 +107,22 @@ iostat -x 1 5

# Move to NVMe if on spinning disk
# Increase WAL autocheckpoint threshold for burst writes
-sqlite3 data/qpq.db "PRAGMA key='${QPQ_DB_KEY}'; PRAGMA wal_autocheckpoint=2000;"
+sqlite3 data/qpc.db "PRAGMA key='${QPC_DB_KEY}'; PRAGMA wal_autocheckpoint=2000;"
```
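To confirm the new threshold took effect, the same pragma with no argument prints the current value (a quick check reusing the key variable above; a sketch, not part of the original guide):

```bash
# Prints the current wal_autocheckpoint value (in pages).
sqlite3 data/qpc.db "PRAGMA key='${QPC_DB_KEY}'; PRAGMA wal_autocheckpoint;"
```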

## Horizontal Scaling

-quicproquo does not yet have built-in multi-node clustering. For horizontal scaling, use these patterns:
+quicprochat does not yet have built-in multi-node clustering. For horizontal scaling, use these patterns:

### Load Balancer (UDP/QUIC)

-Place a UDP load balancer in front of multiple qpq-server instances. Each instance runs independently with its own database.
+Place a UDP load balancer in front of multiple qpc-server instances. Each instance runs independently with its own database.

```
                 +-----------+
-clients ------> | L4 LB     | ----> qpq-server-1 (db-1)
-                | (UDP/QUIC)| ----> qpq-server-2 (db-2)
-                +-----------+       qpq-server-3 (db-3)
+clients ------> | L4 LB     | ----> qpc-server-1 (db-1)
+                | (UDP/QUIC)| ----> qpc-server-2 (db-2)
+                +-----------+       qpc-server-3 (db-3)
```
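A minimal sketch of one way to build this topology with nginx's stream module (the upstream addresses, config path, and consistent-hash choice are illustrative assumptions; the file must be included at the top level of nginx.conf):

```bash
# Hypothetical L4 config: consistent hashing on the client address pins
# each client to one node, which matters since every instance owns its
# own database.
cat > /etc/nginx/qpc-stream.conf <<'EOF'
stream {
    upstream qpc_servers {
        hash $remote_addr consistent;
        server 10.0.1.1:7000;
        server 10.0.1.2:7000;
        server 10.0.1.3:7000;
    }
    server {
        listen 7000 udp;
        proxy_pass qpc_servers;
    }
}
EOF
```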

**Requirements:**
@@ -134,7 +134,7 @@ Place a UDP load balancer in front of multiple qpq-server instances. Each instan
Enable federation to relay messages between nodes:

```toml
-# qpq-server.toml on node-1
+# qpc-server.toml on node-1
[federation]
enabled = true
domain = "node1.chat.example.com"
@@ -153,9 +153,9 @@ address = "10.0.1.2:7001"
For true horizontal scaling, migrate from SQLCipher to a shared PostgreSQL instance. This is not yet implemented but is the planned approach for multi-node deployments.

```
-qpq-server-1 --\
-qpq-server-2 ---+--> PostgreSQL (shared)
-qpq-server-3 --/
+qpc-server-1 --\
+qpc-server-2 ---+--> PostgreSQL (shared)
+qpc-server-3 --/
```

## Connection Tuning
@@ -174,7 +174,7 @@ For high connection counts, consider:
- Increasing UDP buffer sizes:

```bash
-# /etc/sysctl.d/99-qpq.conf
+# /etc/sysctl.d/99-qpc.conf
net.core.rmem_max = 26214400
net.core.wmem_max = 26214400
net.core.rmem_default = 1048576
@@ -182,7 +182,7 @@ net.core.wmem_default = 1048576
```

```bash
-sysctl -p /etc/sysctl.d/99-qpq.conf
+sysctl -p /etc/sysctl.d/99-qpc.conf
```
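To verify the kernel picked up the new limits, query the keys directly (same names as above; a quick check, not part of the original guide):

```bash
# Print the effective UDP buffer ceilings after applying the file.
sysctl net.core.rmem_max net.core.wmem_max
```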

## Docker Resource Limits
@@ -211,11 +211,11 @@ Use the included test infrastructure to benchmark:

```bash
# Build the test client
-cargo build --release --bin qpq-client
+cargo build --release --bin qpc-client

# Run concurrent connection test (example)
for i in $(seq 1 100); do
-  qpq-client --server 127.0.0.1:7000 --auth-token "$QPQ_AUTH_TOKEN" &
+  qpc-client --server 127.0.0.1:7000 --auth-token "$QPC_AUTH_TOKEN" &
done
wait
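# (Illustrative follow-up, not in the original guide.) After the run, check
# the server's memory footprint and lifetime CPU average:
ps -o pid,pcpu,pmem,rss -p "$(pgrep qpc-server)"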