docs: update getting-started and contributing docs for v2

Remove the capnp compiler requirement from prerequisites (protobuf-src
vendors protoc automatically). Update building.md for 9 crates and the
justfile commands. Rewrite running-the-server.md with accurate v2 flags
(--allow-insecure-auth, --sealed-sender, --plugin-dir, --ws-listen,
--webtransport-listen, --federation-enabled, QPQ_PRODUCTION). Update
docker.md to remove capnproto install from builder stage description.
Delete bot-sdk.md and generators.md (removed crates). Update testing.md
with the accurate 301-test breakdown across 9 crates and the AUTH_LOCK
note for E2E tests. Update coding-standards.md dependency table to list
prost as primary serialisation, capnp as legacy-only, and add opaque-ke.
2026-03-04 22:00:23 +01:00
parent 189534c511
commit f7a7f672b4
8 changed files with 405 additions and 675 deletions

View File

@@ -16,8 +16,7 @@ in any merged code:
- Stub implementations or placeholder logic
- Mock objects in production code paths (mocks are acceptable only in test code)
- Commented-out code blocks
- `#[allow(unused)]` on production code (acceptable on prost-generated code)
If a feature is out of scope for the current milestone, it is **explicitly
omitted** with a documented reason (in an ADR or code comment explaining why it
@@ -84,11 +83,11 @@ pub fn create_group(
### Error Handling
- No `unwrap()` or `expect()` on cryptographic operations or I/O in non-test
paths. All crypto errors must be typed and propagated.
- Use `thiserror` for library error types (`quicproquo-core`,
`quicproquo-proto`, `quicproquo-rpc`, `quicproquo-sdk`) and `anyhow` for
application-level error handling (`quicproquo-server`, `quicproquo-client`).
- `unwrap()` is acceptable only in:
- Test code.
- Cases where the invariant is provably guaranteed by the type system
@@ -119,6 +118,8 @@ pub fn create_group(
runtime stage.
- The Docker image must build and run correctly after every merge to the main
branch.
- The builder stage does not install extra system packages for code generation;
  `protobuf-src` vendors `protoc` automatically.
---
@@ -131,21 +132,27 @@ updates are allowed; major version bumps require justification and review.
### Preferred Ecosystem
| Domain | Preferred Crate(s) | Notes |
|--------|-------------------|-------|
| Classical crypto (signing) | `ed25519-dalek` | |
| Classical crypto (key exchange) | `x25519-dalek` | |
| OPAQUE authentication | `opaque-ke` (v4) | Ristretto255 + Argon2 |
| MLS | `openmls 0.5`, `openmls_rust_crypto` | RFC 9420 |
| Post-quantum KEM | `ml-kem` (ML-KEM-768, FIPS 203) | |
| Serialisation / RPC (v2) | `prost`, `prost-build`, `protobuf-src` | Primary wire format |
| Serialisation (v1 legacy) | `capnp`, `capnp-rpc` | Legacy only, not for new code |
| Async runtime | `tokio` | |
| QUIC transport | `quinn` | |
| Middleware | `tower` | |
| Storage | `rusqlite` with `bundled-sqlcipher` | |
| Zeroisation | `zeroize` | |
| Internal serialisation | `bincode` | For MLS entities and file-backed store |
Do not introduce new dependencies without justification. In particular:
- No alternative async runtimes (async-std, smol).
- No alternative serialisation formats for wire protocol use (new code must
use Protobuf via `prost`; Cap'n Proto is legacy-only).
- No alternative crypto libraries unless the preferred crate lacks required
functionality.
@@ -225,9 +232,9 @@ Background sweep is deferred to M6 (requires persistent storage).
### Linting
- `cargo clippy --workspace -- -D warnings`. No `#[allow(clippy::...)]` without a
comment explaining why the lint is suppressed.
- CI treats clippy warnings as errors (`-D warnings`).
### Naming
@@ -256,7 +263,7 @@ Before presenting any code for review, verify:
- [ ] No deviation from these standards.
- [ ] Doc comments on all public items.
- [ ] Tests for all new functionality (see [Testing Strategy](testing.md)).
- [ ] `cargo fmt`, `cargo clippy --workspace -- -D warnings`, and `cargo test --workspace` all pass.
---

View File

@@ -1,8 +1,8 @@
# Testing Strategy
This page describes the testing structure, conventions, and current coverage for
quicproquo. All tests run with `cargo test --workspace` (or `just test`) and
must pass before any code is merged.
For the coding standards that tests must follow, see
[Coding Standards](coding-standards.md).
@@ -17,55 +17,103 @@ Unit tests live alongside the code they test, in `#[cfg(test)] mod tests` blocks
at the bottom of each source file. They test individual functions and types in
isolation.
**quicproquo-core (96 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| `codec` | 7 | Length-prefixed frame encoding/decoding, edge cases (empty payload, max size, partial frame, exact boundary) |
| `keypair` | 3 | Ed25519 keypair generation, public key extraction, deterministic re-derivation |
| `group` | 2 | Group round-trip (create + add + join + send + recv), group\_id lifecycle |
| `hybrid_kem` | 11 | Encapsulate/decapsulate round-trip, key generation, combiner correctness, wrong-key rejection, serialisation |
| `opaque_auth` | 12 | OPAQUE registration + login full flow, bad password rejection |
| `mls_*` | 61 | MLS key schedule, member add/remove, Welcome processing, key exhaustion |
**quicproquo-rpc (18 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| `framing` | 8 | Wire framing round-trips, method ID encoding, length-prefix correctness |
| `dispatch` | 10 | Handler dispatch, method not found, middleware chain, timeout enforcement |
**quicproquo-sdk (30 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| `client` | 15 | `QpqClient` connect, send, receive, event broadcast |
| `conversation_store` | 15 | `ConversationStore` CRUD, pagination, message ordering |
**quicproquo-server (65 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| `auth` | 20 | OPAQUE registration, login, session management, rate limiting |
| `node_service` | 20 | KeyPackage upload/fetch, message enqueue/deliver, sealed sender |
| `storage` | 15 | `FileBackedStore` and `SqlStore` CRUD, MLS entity serialisation |
| `federation` | 10 | Federation peer relay, mTLS validation, domain routing |
**quicproquo-kt (21 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| `merkle_log` | 21 | Merkle tree insertion, consistency proofs, root hash correctness |
**quicproquo-p2p (34 tests):**
| Module | Tests | What they cover |
|--------|-------|----------------|
| iroh mesh | 34 | P2P peer discovery, relay, mesh join/leave |
### Integration and E2E Tests
E2E tests live in `crates/quicproquo-client/tests/e2e.rs` (20 tests) and
exercise the full client-server stack in-process. Each test spawns a real server
using `tokio::spawn`, runs client operations against it, and asserts on the
results.
**quicproquo-client unit (16 tests):**
| File | What it covers |
|------|---------------|
| `src/lib.rs` | CLI command parsing, client state machine, error formatting |
**quicproquo-client E2E (20 tests):**
| Test | What it covers |
|------|---------------|
| `auth_failure` | Rejected OPAQUE login (wrong password) |
| `message_ordering` | Sequential message delivery order preserved |
| `opaque_flow` | Full OPAQUE registration + login round-trip |
| `key_exhaustion` | Behaviour when KeyPackage queue is empty |
| `rate_limit` | Rate limiting rejects excess requests |
| `mls_group_round_trip` | Full MLS group: create, add member, send, receive |
| `keypackage_single_use` | KeyPackage consumed on first fetch |
| and 13 more | Additional protocol scenarios |
### Test Pattern
All E2E tests follow the same pattern:
```rust
#[tokio::test]
async fn test_something() {
    // 1. Acquire shared lock to avoid port conflicts
    let _lock = AUTH_LOCK.lock().await;

    // 2. Start server in background
    let server_handle = tokio::spawn(async move {
        server::run(config).await.expect("server failed");
    });

    // 3. Wait for server to be ready
    tokio::time::sleep(Duration::from_millis(100)).await;

    // 4. Run client operations
    let result = client::do_something(server_addr).await;

    // 5. Assert
    assert!(result.is_ok());
    // ...

    // 6. Cleanup
    server_handle.abort();
}
```
@@ -80,25 +128,41 @@ server process.
### Full Workspace
```bash
just test
# or
cargo test --workspace
```
This runs all unit tests and integration tests across all nine crates (301 tests total).
### E2E Tests (serialised)
The E2E test suite shares an `AUTH_LOCK` `tokio::Mutex` to prevent port binding
conflicts when tests run in parallel. Always run E2E tests with a single thread:
```bash
cargo test -p quicproquo-client --test e2e -- --test-threads 1
```
Running without `--test-threads 1` may cause intermittent bind errors if two
tests try to use the same port concurrently.
### Single Crate
```bash
cargo test -p quicproquo-core
cargo test -p quicproquo-rpc
cargo test -p quicproquo-sdk
cargo test -p quicproquo-server
cargo test -p quicproquo-client
cargo test -p quicproquo-kt
cargo test -p quicproquo-p2p
```
### Single Test
```bash
cargo test -p quicproquo-core -- codec::tests::test_round_trip
cargo test -p quicproquo-client --test e2e -- opaque_flow --test-threads 1
```
### With Output
@@ -111,17 +175,18 @@ cargo test --workspace -- --nocapture
## Current Results
All 301 tests pass on branch `v2`.
Summary:
| Crate | Unit / Integration Tests | E2E Tests | Total |
|-------|--------------------------|-----------|-------|
| `quicproquo-core` | 96 | -- | 96 |
| `quicproquo-rpc` | 18 | -- | 18 |
| `quicproquo-sdk` | 30 | -- | 30 |
| `quicproquo-server` | 65 | -- | 65 |
| `quicproquo-kt` | 21 | -- | 21 |
| `quicproquo-p2p` | 34 | -- | 34 |
| `quicproquo-client` | 16 unit + 1 doctest | 20 | 37 |
| **Total** | **281** | **20** | **301** |
---
@@ -155,37 +220,41 @@ fn fetch_consumes_keypackage_single_use() { ... }
Tests must not depend on external services, network access, or filesystem state
outside the test's temporary directory. The `tokio::spawn` pattern for
E2E tests ensures everything runs in-process.
### Determinism
Tests must be deterministic. If randomness is needed (e.g., key generation),
the test must not depend on specific random values, only on the properties of
the output (correct length, successful round-trip, etc.).
### No `.unwrap()` in Test Setup
`.unwrap()` is acceptable in test assertions, but test setup that fails silently
is not. Use `expect("descriptive message")` on setup operations so failures
report clearly.
---
## Planned Testing Enhancements
The following testing improvements are planned for future milestones:
### Fuzzing Targets (M5+)
Fuzz testing for parser and deserialisation code:
- **Protobuf message parser:** Feed arbitrary bytes to `prost::Message::decode`
on each generated type and verify it either parses correctly or returns a
typed error (no panics, no undefined behaviour).
- **MLS message handler:** Feed arbitrary `MLSMessage` bytes to the
`GroupMember::receive_message` path.
Tool: `cargo-fuzz` with `libfuzzer`.
### Golden-Wire Fixtures (M5+)
Serialised test vectors for regression testing across versions:
- Capture the wire bytes of known-good Protobuf messages at the current version.
- Store as `.bin` files in `tests/fixtures/`.
- Each test deserialises the fixture and verifies the expected field values.
- When the wire format changes, fixtures are updated with a version bump.
@@ -200,8 +269,7 @@ version N-1 (and vice versa):
- Build two versions of the binary (current and previous release).
- Run the older server with the newer client and verify all RPCs succeed.
- Run the newer server with the older client and verify graceful degradation.
### Criterion Benchmarks (M5)
@@ -210,15 +278,14 @@ Performance benchmarks using [Criterion.rs](https://docs.rs/criterion/):
- Key generation latency (Ed25519, X25519, ML-KEM-768).
- MLS encap/decap (KeyPackage generation, Welcome processing).
- Group-add latency scaling: 2, 10, 100, 1000 members.
- Protobuf serialise/deserialise throughput.
Benchmarks run separately from tests (`cargo bench`) and are not part of the
CI gate, but are tracked for regression detection.
### Docker-based E2E Tests (Phase 5)
End-to-end tests using `testcontainers-rs`:
- Spin up server container from the Docker image.
- Run client operations from the test process against the containerised server.
@@ -232,4 +299,3 @@ End-to-end tests using `testcontainers-rs` (see
- [Coding Standards](coding-standards.md) -- quality requirements for test code
- [Milestones](../roadmap/milestones.md) -- which tests were added at each milestone
- [Production Readiness WBS](../roadmap/production-readiness.md) -- Phase 5 (E2E Harness and Security Tests)

View File

@@ -1,233 +0,0 @@
# Bot SDK
The `quicproquo-bot` crate provides a high-level SDK for building automated
agents on the quicproquo network. Bots authenticate with OPAQUE, send and
receive E2E encrypted messages through MLS, and can be driven programmatically
or via a JSON pipe interface for shell integration.
---
## Adding the dependency
```toml
[dependencies]
quicproquo-bot = { path = "../crates/quicproquo-bot" }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
anyhow = "1"
```
---
## Quick start
```rust,no_run
use quicproquo_bot::{Bot, BotConfig};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let config = BotConfig::new("127.0.0.1:7000", "bot-user", "bot-password")
        .ca_cert("server-cert.der")
        .state_path("bot-state.bin");
    let bot = Bot::connect(config).await?;

    // Send a DM
    bot.send_dm("alice", "Hello from bot!").await?;

    // Poll for messages
    loop {
        for msg in bot.receive(5000).await? {
            println!("{}: {}", msg.sender, msg.text);
            if msg.text.starts_with("!echo ") {
                bot.send_dm(&msg.sender, &msg.text[6..]).await?;
            }
        }
    }
}
```
---
## Configuration
`BotConfig` uses a builder pattern. The only required arguments are the server
address, username, and password:
```rust,no_run
# use quicproquo_bot::BotConfig;
let config = BotConfig::new("127.0.0.1:7000", "my-bot", "secret123")
.ca_cert("certs/server-cert.der") // TLS CA certificate (DER format)
.server_name("my-server.example") // TLS SNI (default: "localhost")
.state_path("my-bot-state.bin") // Persistent state file
.state_password("encrypt-me") // State file encryption password
.device_id("bot-device-1"); // Device identifier
```
| Method | Default | Description |
|-------------------|-----------------------|-------------|
| `ca_cert()` | `"server-cert.der"` | Path to the server's CA certificate in DER format. |
| `server_name()` | `"localhost"` | TLS server name for certificate validation. |
| `state_path()` | `"bot-state.bin"` | Path to the bot's encrypted state file. |
| `state_password()` | None (unencrypted) | Password for encrypting the state file at rest. |
| `device_id()` | None | Device ID reported to the server in auth tokens. |
---
## Sending messages
```rust,no_run
# use quicproquo_bot::Bot;
# async fn example(bot: &Bot) -> anyhow::Result<()> {
// Send a plaintext DM — encryption is handled internally via MLS
bot.send_dm("alice", "Hello!").await?;
# Ok(())
# }
```
`send_dm` resolves the username, establishes or joins the MLS group for the DM
channel, encrypts the plaintext, and delivers it through the server. Each call
opens a fresh QUIC connection (stateless reconnect pattern).
---
## Receiving messages
```rust,no_run
# use quicproquo_bot::Bot;
# async fn example(bot: &Bot) -> anyhow::Result<()> {
// Wait up to 5 seconds for pending messages
let messages = bot.receive(5000).await?;
for msg in &messages {
    println!("[seq={}] {}: {}", msg.seq, msg.sender, msg.text);
}

// For binary/non-UTF-8 content, use receive_raw
let raw_messages = bot.receive_raw(5000).await?;
for payload in &raw_messages {
    println!("received {} bytes", payload.len());
}
# Ok(())
# }
```
The `Message` struct contains:
| Field | Type | Description |
|----------|----------|-------------|
| `sender` | `String` | The sender's username. |
| `text` | `String` | Decrypted plaintext content (UTF-8). |
| `seq` | `u64` | Sequence number. |
---
## Resolving users
```rust,no_run
# use quicproquo_bot::Bot;
# async fn example(bot: &Bot) -> anyhow::Result<()> {
let identity_key = bot.resolve_user("alice").await?;
println!("alice's identity key: {} bytes", identity_key.len());
# Ok(())
# }
```
---
## Identity inspection
```rust,no_run
# use quicproquo_bot::Bot;
# fn example(bot: &Bot) {
println!("username: {}", bot.username());
println!("identity key (hex): {}", bot.identity_key_hex());
let raw_key: [u8; 32] = bot.identity_key();
# }
```
---
## Pipe mode (stdin/stdout JSON lines)
For shell integration, the bot SDK supports a JSON-lines pipe interface. Each
line on stdin is a JSON command; results are written to stdout as JSON lines.
### Supported actions
**Send a message:**
```json
{"action": "send", "to": "alice", "text": "hello from pipe"}
```
Response:
```json
{"status": "ok", "action": "send"}
```
**Receive pending messages:**
```json
{"action": "recv", "timeout_ms": 5000}
```
Response:
```json
{"status": "ok", "messages": [{"sender": "peer", "text": "hi", "seq": 0}]}
```
**Resolve a username:**
```json
{"action": "resolve", "username": "alice"}
```
Response:
```json
{"status": "ok", "identity_key": "ab12cd34..."}
```
### Error responses
All actions return an error object on failure:
```json
{"error": "OPAQUE login: connection refused"}
```
### Shell examples
```bash
# Send via pipe
echo '{"action":"send","to":"alice","text":"hello"}' | my-bot-binary
# Receive via pipe
echo '{"action":"recv","timeout_ms":5000}' | my-bot-binary
# Use with jq for pretty output
echo '{"action":"recv","timeout_ms":3000}' | my-bot-binary | jq .
```
---
## Architecture notes
- **Stateless reconnect**: Each `send_dm` and `receive` call opens a fresh QUIC
connection. There is no persistent connection to manage.
- **MLS encryption**: All messages are end-to-end encrypted via MLS (RFC 9420).
The bot SDK wraps the client library's `cmd_send` and
`receive_pending_plaintexts` functions.
- **State persistence**: The bot's identity seed and MLS group state are stored
in the state file. Losing this file means losing the bot's identity.
- **Cap'n Proto !Send**: RPC calls run on a `tokio::task::LocalSet` because
`capnp-rpc` is `!Send`.
---
## Next steps
- [Running the Client](running-the-client.md) -- CLI subcommands and REPL
- [Server Hooks](../internals/server-hooks.md) -- extend the server with plugins
- [Demo Walkthrough](demo-walkthrough.md) -- step-by-step messaging scenario

View File

@@ -1,6 +1,6 @@
# Building from Source
This page covers compiling the workspace, running the test suite, and the `just` convenience commands available for common development tasks.
---
@@ -12,14 +12,25 @@ From the repository root:
cargo build --workspace
```
Or using the `just` shortcut:
```bash
just build
```
This compiles all nine crates in the workspace:
| Crate | Type | Purpose |
|---|---|---|
| `quicproquo-core` | library | Crypto primitives, MLS `GroupMember` state machine, hybrid KEM |
| `quicproquo-proto` | library | Protobuf schemas (prost), generated types, method ID constants |
| `quicproquo-kt` | library | Key Transparency Merkle log |
| `quicproquo-plugin-api` | library | `#![no_std]` C-ABI plugin interface (`HookVTable`) |
| `quicproquo-rpc` | library | QUIC RPC framing, server dispatcher, client, Tower middleware |
| `quicproquo-sdk` | library | `QpqClient`, event broadcast, `ConversationStore` |
| `quicproquo-server` | binary | Unified Authentication + Delivery Service (`qpq-server`) |
| `quicproquo-client` | binary | CLI client (`qpq`) with REPL and subcommands |
| `quicproquo-p2p` | library | iroh P2P layer (compiled when the `mesh` feature is enabled) |
For a release build with LTO, symbol stripping, and single codegen unit:
@@ -39,55 +50,72 @@ strip = "symbols"
---
## `just` commands
A `justfile` at the repository root provides shortcuts for common tasks:
| Command | Equivalent | Description |
|---|---|---|
| `just build` | `cargo build --workspace` | Build all crates |
| `just test` | `cargo test --workspace` | Run full test suite |
| `just lint` | `cargo clippy --workspace -- -D warnings` | Check for warnings (CI-strict) |
| `just fmt` | `cargo fmt --all -- --check` | Check formatting |
| `just fmt-fix` | `cargo fmt --all` | Auto-format |
| `just proto` | `cargo build -p quicproquo-proto` | Trigger Protobuf codegen |
| `just rpc` | `cargo build -p quicproquo-rpc` | Build RPC framework only |
| `just sdk` | `cargo build -p quicproquo-sdk` | Build client SDK only |
| `just server` | `cargo build -p quicproquo-server` | Build server only |
| `just client` | `cargo build -p quicproquo-client` | Build CLI client only |
| `just clean` | `cargo clean` | Remove build artifacts |
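A `just` recipe is a name, a colon, and indented shell lines; the table above maps onto entries like this sketch (illustrative and consistent with the table, not the repository's exact file):

```just
# Sketch of the recipes listed above; the real justfile may differ.
build:
    cargo build --workspace

lint:
    cargo clippy --workspace -- -D warnings

proto:
    cargo build -p quicproquo-proto
```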
---
## Running the test suite
```bash
just test
# or
cargo test --workspace
```
The E2E tests use a shared `AUTH_LOCK` mutex to prevent port conflicts. Run them
with a single thread to avoid flaky failures:
```bash
cargo test --workspace -- --test-threads 1
```
To run tests for a single crate:
```bash
cargo test -p quicproquo-core
cargo test -p quicproquo-server
cargo test -p quicproquo-rpc
```
---
## Protobuf code generation
The `quicproquo-proto` crate does not contain hand-written Rust types for wire messages. Instead, its `build.rs` script uses `prost-build` to generate Rust source from the `.proto` schema files in `proto/qpq/v1/`.
### How it works
1. `build.rs` configures `prost_build::Config::new()` and compiles the 11 schema files under `proto/`.
2. `prost-build` locates the `protoc` binary via the `protobuf-src` crate, which compiles and vendors a compatible `protoc` binary at build time. **No system installation of `protoc` is required.**
3. The generated Rust source is written to `$OUT_DIR` (Cargo's build output directory).
4. `src/lib.rs` includes the generated code via `include!(concat!(env!("OUT_DIR"), "/..."))` macros.
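Concretely, the steps above correspond to a `build.rs` along these lines. This is a build-configuration sketch under stated assumptions: the file name `envelope.proto` is hypothetical, and the real script lists all 11 schemas:

```rust
// crates/quicproquo-proto/build.rs -- illustrative sketch only.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point prost-build at the protoc vendored by protobuf-src,
    // so no system protoc installation is needed.
    std::env::set_var("PROTOC", protobuf_src::protoc());

    // The real crate compiles all 11 .proto files here.
    prost_build::Config::new()
        .compile_protos(&["proto/qpq/v1/envelope.proto"], &["proto/"])?;
    Ok(())
}
```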
### Rebuild triggers
The `build.rs` script emits `cargo:rerun-if-changed` directives for each `.proto` file. Modifying a schema triggers automatic re-generation on the next `cargo build`.
### Design constraints of `quicproquo-proto`
The proto crate is intentionally restricted:
- **No crypto** -- key material never enters this crate.
- **No I/O** -- callers own the transport; this crate converts bytes to types and back.
- **No async** -- pure synchronous data-layer code.
For details on the wire format, see the [Wire Format Reference](../wire-format/overview.md).
@@ -96,27 +124,9 @@ For details on the wire format, see the [Wire Format Reference](../wire-format/o
## Troubleshooting
### Slow first build
The first build downloads and compiles all dependencies (including `openmls`, `quinn`, `rustls`, `prost-build`, `protobuf-src`, etc.). This can take several minutes depending on your hardware. The `protobuf-src` compilation step is the most time-consuming on a cold cache. Subsequent builds are incremental and much faster.
### Linker errors on macOS with Apple Silicon
@@ -126,14 +136,18 @@ If you see linker errors related to `ring` or `aws-lc-sys` (used transitively by
xcode-select --install
```
### E2E tests failing with port conflicts
Run E2E tests with `--test-threads 1` to serialise the tests and avoid bind conflicts on the shared test port:
```bash
cargo test -p quicproquo-client --test e2e -- --test-threads 1
```
---
## Next steps
- [Running the Server](running-the-server.md) -- start the server endpoint
- [Running the Client](running-the-client.md) -- CLI subcommands and usage examples
- [Docker Deployment](docker.md) -- build and run in containers

View File

@@ -37,26 +37,23 @@ services:
      context: .
      dockerfile: docker/Dockerfile
    ports:
      - "7000:7000/udp"
    environment:
      RUST_LOG: "info"
      QPQ_LISTEN: "0.0.0.0:7000"
      QPQ_DATA_DIR: "/var/lib/quicproquo"
      QPQ_PRODUCTION: "true"
    volumes:
      - server-data:/var/lib/quicproquo
    restart: unless-stopped

volumes:
  server-data:
```
### Port mapping
The container exposes port `7000` (QUIC/UDP). Note that QUIC uses UDP, so ensure your firewall allows UDP traffic on this port.
### Restart policy
@@ -73,18 +70,34 @@ The Dockerfile at `docker/Dockerfile` uses a two-stage build to produce a minima
```dockerfile
FROM rust:bookworm AS builder
RUN apt-get update \
&& apt-get install -y --no-install-recommends capnproto \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
# Copy manifests first so dependency layers are cached independently of source.
COPY Cargo.toml Cargo.lock ./
COPY crates/quicproquo-core/Cargo.toml crates/quicproquo-core/Cargo.toml
# ... (all 9 crate manifests)
# Create dummy source files for dependency caching.
RUN mkdir -p ... && echo 'fn main() {}' > ...
# Schemas must exist before the proto crate's build.rs runs.
COPY schemas/ schemas/
# Build dependencies only (cache layer).
RUN cargo build --release --bin qpq-server 2>/dev/null || true
# Copy real source and build for real.
COPY crates/ crates/
RUN cargo build --release --bin qpq-server
```
Key steps:
1. **Base image**: `rust:bookworm` (Debian Bookworm with the Rust toolchain pre-installed).
2. **Install `capnproto`**: Required by `quicproquo-proto/build.rs` to compile `.capnp` schemas at build time.
3. **Copy manifests first**: `Cargo.toml` and `Cargo.lock` are copied before source code. Dummy `main.rs` / `lib.rs` stubs are created so that `cargo build` can resolve and cache the dependency graph. This ensures that dependency compilation is cached in a separate Docker layer -- subsequent builds that only change source code skip the dependency compilation step entirely.
4. **Copy schemas**: The `schemas/` directory is copied before the dependency build because `quicproquo-proto/build.rs` requires the `.capnp` files during compilation.
5. **Copy real source and build**: After the dependency cache layer, real source files are copied in and `cargo build --release` is run.
2. **No system compiler required**: Unlike v1, the builder stage does not install `capnproto`. The v2 Protobuf compiler is vendored by `protobuf-src` and compiled automatically as part of `cargo build`.
3. **Copy manifests first**: `Cargo.toml` and `Cargo.lock` are copied before source code with dummy stubs so that dependency compilation is cached in a separate Docker layer.
4. **Copy schemas**: The `schemas/` directory is copied before the dependency build because `quicproquo-proto/build.rs` references it.
5. **Copy real source and build**: After the dependency cache layer, real source files are copied in and `cargo build --release` produces the final binary.
### Stage 2: Runtime (`debian:bookworm-slim`)
@@ -97,39 +110,50 @@ RUN apt-get update \
COPY --from=builder /build/target/release/qpq-server /usr/local/bin/qpq-server
RUN groupadd --system qpq \
&& useradd --system --gid qpq --no-create-home --shell /usr/sbin/nologin qpq \
&& mkdir -p /var/lib/quicproquo \
&& chown qpq:qpq /var/lib/quicproquo
EXPOSE 7000
ENV RUST_LOG=info \
QPQ_LISTEN=0.0.0.0:7000
VOLUME ["/var/lib/quicproquo"]
USER nobody
ENV RUST_LOG=info \
QPQ_LISTEN=0.0.0.0:7000 \
QPQ_DATA_DIR=/var/lib/quicproquo \
QPQ_TLS_CERT=/var/lib/quicproquo/server-cert.der \
QPQ_TLS_KEY=/var/lib/quicproquo/server-key.der \
QPQ_PRODUCTION=true
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD test -f /var/lib/quicproquo/server-cert.der || exit 1
USER qpq
CMD ["qpq-server"]
```
Key characteristics:
- **Minimal image**: No Rust toolchain, no `capnp` compiler, no build artifacts.
- **`ca-certificates`**: Included for future HTTPS calls (e.g., ACME certificate provisioning or key sync endpoints).
- **Non-root execution**: The container runs as `nobody` for defense in depth.
- **Default port**: The Dockerfile defaults to port `7000` via `QPQ_LISTEN`, but the `docker-compose.yml` overrides this to `7000` for consistency with the development workflow.
> **Note**: The `EXPOSE 7000` directive in the Dockerfile and the `QPQ_LISTEN=0.0.0.0:7000` override in `docker-compose.yml` mean the effective listen port is `7000` when using Compose. If you run the Docker image directly without Compose, the server will listen on `7000` by default.
- **Minimal image**: No Rust toolchain, no build tools, no `protoc` binary.
- **`ca-certificates`**: Included for future HTTPS calls (e.g., ACME certificate provisioning).
- **Dedicated user**: The container runs as the `qpq` system user (not `root`) for defense in depth.
- **Named volume**: `/var/lib/quicproquo` is declared as a `VOLUME` for data persistence.
- **`QPQ_PRODUCTION=true`**: The runtime image defaults to production mode, requiring pre-existing TLS certificates and a strong auth token.
---
## Volume persistence
The server stores its state (TLS certificates, KeyPackages, delivery queues, hybrid keys) in the data directory (default `data/`). To persist this data across container restarts, mount a volume:
The server stores its state (TLS certificates, KeyPackages, delivery queues, OPAQUE setup, KT log) in the data directory. Mount a volume to persist this data across container restarts:
```yaml
services:
server:
# ... existing config ...
volumes:
- server-data:/data
environment:
QPQ_DATA_DIR: "/data"
- server-data:/var/lib/quicproquo
volumes:
server-data:
@@ -138,10 +162,16 @@ volumes:
Or use a bind mount for easier inspection:
```bash
docker compose run \
-v $(pwd)/server-data:/data \
-e QPQ_DATA_DIR=/data \
server
mkdir -p ./server-data
docker run -d \
--name quicproquo \
-p 7000:7000/udp \
-v "$(pwd)/server-data:/var/lib/quicproquo" \
-e QPQ_ALLOW_INSECURE_AUTH=true \
-e QPQ_PRODUCTION=false \
-e RUST_LOG=info \
qpq-server
```
Without a volume, all server state (including TLS certificates and message queues) is lost when the container is removed. The server will generate a new self-signed certificate on each fresh start, which means clients will need the new certificate to connect.
@@ -156,19 +186,18 @@ To build the Docker image without starting a container:
docker build -t qpq-server -f docker/Dockerfile .
```
To run it manually:
To run it in development mode (without production validation):
```bash
docker run -d \
--name quicproquo \
-p 7000:7000/udp \
-e QPQ_LISTEN=0.0.0.0:7000 \
-e QPQ_ALLOW_INSECURE_AUTH=true \
-e QPQ_PRODUCTION=false \
-e RUST_LOG=info \
qpq-server
```
Note the `/udp` suffix on the port mapping -- QUIC runs over UDP.
---
## Connecting the client to a containerised server
@@ -176,8 +205,8 @@ Note the `/udp` suffix on the port mapping -- QUIC runs over UDP.
When the server runs in Docker with `docker compose up`, the client can connect from the host:
```bash
# Extract the server's TLS cert from the container
docker compose cp server:/data/server-cert.der ./data/server-cert.der
# Extract the server's TLS cert from the container volume
docker compose cp server:/var/lib/quicproquo/server-cert.der ./data/server-cert.der
# Connect
cargo run -p quicproquo-client -- ping \
@@ -185,7 +214,7 @@ cargo run -p quicproquo-client -- ping \
--server-name localhost
```
If you mounted a volume (e.g., `./server-data:/data`), the certificate is directly accessible at `./server-data/server-cert.der`.
If you mounted a bind volume (e.g., `./server-data:/var/lib/quicproquo`), the certificate is directly accessible at `./server-data/server-cert.der`.
---

View File

@@ -1,171 +0,0 @@
# Code Generators (qpq-gen)
The `qpq-gen` CLI tool scaffolds new plugins, bots, RPC methods, and hook
events for the quicproquo ecosystem.
## Installation
```bash
cargo install --path crates/quicproquo-gen
```
Or run directly from the workspace:
```bash
cargo run -p quicproquo-gen -- <subcommand>
```
## Subcommands
### `qpq-gen plugin <name>` -- Server Plugin
Scaffolds a standalone Cargo project for a server plugin compiled as a shared
library (`cdylib`). The generated plugin implements the `HookVTable` C ABI
and is loaded by the server at startup via `--plugin-dir`.
```bash
qpq-gen plugin rate-limiter
qpq-gen plugin audit-log --output /tmp/plugins
```
**Generated files:**
```
rate_limiter/
Cargo.toml # cdylib crate depending on quicproquo-plugin-api
README.md # Build and install instructions
src/lib.rs # Plugin skeleton with qpq_plugin_init entry point
```
The template includes:
- `qpq_plugin_init` -- called by the server on load; populates the `HookVTable`
- `on_message_enqueue` -- sample hook that rejects payloads larger than 1 MB
- `error_message` -- returns the rejection reason as a C string
- `destroy` -- frees the plugin state
**What to customize:** Replace the `on_message_enqueue` logic with your own
policy. Add more hooks by setting additional fields on the `HookVTable`
(`on_auth`, `on_channel_created`, `on_fetch`, `on_user_registered`,
`on_batch_enqueue`).
**Build and install:**
```bash
cd rate_limiter
cargo build --release
cp target/release/librate_limiter.so /path/to/plugins/
qpq-server --plugin-dir /path/to/plugins/
```
### `qpq-gen bot <name>` -- Bot Project
Scaffolds a standalone bot project using the Bot SDK. The generated binary
connects to a quicproquo server, authenticates via OPAQUE, and runs a
message-handling loop.
```bash
qpq-gen bot echo-bot
qpq-gen bot moderation-bot --output /tmp/bots
```
**Generated files:**
```
moderation_bot/
Cargo.toml # Binary crate depending on quicproquo-bot + tokio
README.md # Quick-start and command reference
src/main.rs # Bot skeleton with handle_message dispatcher
```
The template ships with four built-in commands as examples:
| Command | Description |
|-----------------|---------------------------|
| `!help` | List available commands |
| `!echo <text>` | Echo back the text |
| `!whoami` | Show the sender's username|
| `!ping` | Respond with "pong!" |
**Configuration** is read from environment variables:
| Variable | Default |
|-------------------|----------------------|
| `QPQ_SERVER` | `127.0.0.1:7000` |
| `QPQ_USERNAME` | `<bot-name>` |
| `QPQ_PASSWORD` | `changeme` |
| `QPQ_CA_CERT` | `server-cert.der` |
| `QPQ_STATE_PATH` | `<bot-name>-state.bin` |
**What to customize:** Edit the `handle_message` function in `src/main.rs`
to add your own command handlers. Return `Some(response)` to reply, or
`None` to stay silent.
**Run:**
```bash
cd moderation_bot
QPQ_SERVER=127.0.0.1:7000 \
QPQ_USERNAME=moderation_bot \
QPQ_PASSWORD=changeme \
QPQ_CA_CERT=path/to/server-cert.der \
cargo run
```
### `qpq-gen rpc <Name>` -- RPC Method Guide
Prints a step-by-step guide for adding a new Cap'n Proto RPC method to the
server. This generator does not create files; it outputs instructions and
code snippets to copy into the appropriate locations.
```bash
qpq-gen rpc listChannels
```
The `Name` argument should be in camelCase (e.g., `listChannels`). The
generator derives the `snake_case` form automatically for file and function
names.
**Steps covered:**
1. **Schema** -- Add the method to the `interface NodeService` block in
`schemas/node.capnp`, then rebuild with `cargo build -p quicproquo-proto`
2. **Handler module** -- Create
`crates/quicproquo-server/src/node_service/<name>.rs` with the handler
implementation (template code is printed)
3. **Registration** -- Wire the handler into `node_service/mod.rs`
4. **Storage** (if needed) -- Add a method to the `Store` trait and implement
it in `sql_store.rs` and `storage.rs`
5. **Hook** (optional) -- Run `qpq-gen hook <name>` to let plugins observe
the new RPC
6. **Verify** -- `cargo build -p quicproquo-server && cargo test -p quicproquo-server`
### `qpq-gen hook <name>` -- Hook Event Guide
Prints a step-by-step guide for adding a new server hook event that plugins
can observe. Like `rpc`, this generator outputs instructions rather than
creating files.
```bash
qpq-gen hook message_deleted
```
The `name` argument should be in `snake_case` (e.g., `message_deleted`). The
generator derives the `PascalCase` form for struct names.
**Steps covered:**
1. **Event struct** -- Define `MessageDeletedEvent` in
`crates/quicproquo-server/src/hooks.rs`
2. **Trait method** -- Add `on_message_deleted` to the `ServerHooks` trait
with a default no-op implementation
3. **Tracing** -- Implement the hook in `TracingHooks` with a `tracing::info!`
call
4. **Plugin API** -- Add a C-compatible `CMessageDeletedEvent` struct and an
`on_message_deleted` field to `HookVTable` in
`crates/quicproquo-plugin-api/src/lib.rs`
5. **Plugin dispatch** -- Wire the conversion and dispatch in
`plugin_loader.rs`
6. **Call site** -- Fire the hook from the relevant RPC handler in
`node_service/`
7. **Verify** -- Build and test `quicproquo-plugin-api` and
`quicproquo-server`

View File

@@ -1,6 +1,6 @@
# Prerequisites
Before building quicproquo you need a Rust toolchain and the Cap'n Proto schema compiler. Docker is optional but useful for reproducible builds and deployment.
Before building quicproquo you need a Rust toolchain. No other system tools are required -- Protobuf compilation is handled automatically at build time by the `protobuf-src` crate, which vendors the `protoc` compiler. Docker is optional and useful for reproducible builds and deployment.
---
@@ -23,52 +23,15 @@ rustc --version # should print 1.77.0 or later
cargo --version
```
The workspace depends on several crates that use procedural macros (`serde_derive`, `clap_derive`, `tls_codec_derive`, `thiserror`). These compile during the build step and require no additional system libraries beyond what `rustc` ships.
The workspace depends on several crates that use procedural macros (`serde_derive`, `clap_derive`, `tls_codec_derive`, `thiserror`, `prost-derive`). These compile during the build step and require no additional system libraries beyond what `rustc` ships.
---
## Cap'n Proto compiler (`capnp`)
## No external compiler dependencies
The `quicproquo-proto` crate runs a `build.rs` script that invokes the `capnp` binary at compile time to generate Rust types from the `.capnp` schema files in `schemas/`. The `capnp` binary must be on your `PATH`.
In v2, all wire-format serialisation uses [Protobuf](https://protobuf.dev/) via the `prost` crate. The `quicproquo-proto` crate's `build.rs` script drives code generation through `prost-build`, which in turn uses the `protobuf-src` crate to compile and use a vendored copy of `protoc`. **You do not need to install `protoc` or any other system compiler.**
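For reference, the build wiring amounts to a pair of build-dependencies -- this is a sketch, and the version numbers are illustrative; check the workspace manifest for the real ones:

```toml
# crates/quicproquo-proto/Cargo.toml (sketch; versions illustrative)
[build-dependencies]
prost-build = "0.13"
protobuf-src = "2"
```

The `build.rs` script then points `prost-build` at the vendored compiler, typically by exporting `PROTOC` from `protobuf_src::protoc()` before calling `prost_build::compile_protos`.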
### Debian / Ubuntu
```bash
sudo apt-get update
sudo apt-get install -y capnproto
```
### macOS (Homebrew)
```bash
brew install capnp
```
### Verify installation
```bash
capnp --version
# Expected output: Cap'n Proto version X.Y.Z
```
If `capnp` is not found, the build will fail with an error from `capnpc::CompilerCommand`:
```
Cap'n Proto schema compilation failed. Is `capnp` installed?
(apt-get install capnproto / brew install capnp)
```
See [Building from Source -- Troubleshooting](building.md#troubleshooting) for more details.
### Other platforms
| Platform | Install command |
|---|---|
| Fedora / RHEL | `dnf install capnproto` |
| Arch Linux | `pacman -S capnproto` |
| Nix | `nix-env -iA nixpkgs.capnproto` |
| Windows (vcpkg) | `vcpkg install capnproto` |
| From source | [capnproto.org/install.html](https://capnproto.org/install.html) |
The legacy Cap'n Proto schemas (`schemas/`) are still present for reference, but the v2 runtime and RPC framework use Protobuf exclusively.
---
@@ -84,7 +47,7 @@ docker --version # 20.10+
docker compose version # v2+
```
The provided `docker/Dockerfile` is a multi-stage build that installs `capnproto` in the builder stage, so you do **not** need the `capnp` binary on your host when building via Docker.
The `docker/Dockerfile` is a multi-stage build that does not install any extra system packages in the builder stage -- `protobuf-src` takes care of the Protobuf compiler at compile time.
See [Docker Deployment](docker.md) for full instructions.
@@ -95,7 +58,7 @@ See [Docker Deployment](docker.md) for full instructions.
| Dependency | Required? | How to check |
|---|---|---|
| Rust stable 1.77+ | Yes | `rustc --version` |
| `capnp` CLI | Yes (host builds) | `capnp --version` |
| `protoc` CLI | No (vendored automatically) | n/a |
| Docker + Compose | No (container builds only) | `docker --version` / `docker compose version` |
Once all prerequisites are satisfied, proceed to [Building from Source](building.md).

View File

@@ -1,34 +1,39 @@
# Running the Server
The quicproquo server is a single binary (`qpq-server`) that exposes a unified **NodeService** endpoint combining Authentication Service (KeyPackage management) and Delivery Service (message relay) operations over a single QUIC + TLS 1.3 connection.
The quicproquo server is a single binary (`qpq-server`) that exposes a unified **NodeService** endpoint combining Authentication Service (OPAQUE registration/login, KeyPackage management) and Delivery Service (message relay) operations over a single QUIC + TLS 1.3 connection.
---
## Quick start
```bash
cargo run -p quicproquo-server
cargo run -p quicproquo-server -- --allow-insecure-auth
```
On first launch the server will:
1. Create the `data/` directory if it does not exist.
2. Generate a self-signed TLS certificate and private key (`data/server-cert.der`, `data/server-key.der`) with SANs `localhost`, `127.0.0.1`, and `::1`.
3. Open a QUIC endpoint on `0.0.0.0:7000`.
4. Begin accepting connections.
3. Generate and persist an OPAQUE `ServerSetup` for authentication.
4. Open a QUIC endpoint on `0.0.0.0:7000`.
5. Begin accepting connections.
You should see output similar to:
```
2025-01-01T00:00:00.000000Z INFO quicproquo_server: generated self-signed TLS certificate cert="data/server-cert.der" key="data/server-key.der"
2025-01-01T00:00:00.000000Z INFO quicproquo_server: accepting QUIC connections addr="0.0.0.0:7000"
2026-01-01T00:00:00.000000Z INFO qpq_server: generated self-signed TLS certificate cert="data/server-cert.der" key="data/server-key.der"
2026-01-01T00:00:00.000000Z INFO qpq_server: accepting QUIC connections addr="0.0.0.0:7000"
```
> **Development note:** `--allow-insecure-auth` bypasses the requirement for a static bearer token. Do not use this flag in production.
---
## Configuration
All configuration is available via CLI flags and environment variables. Environment variables take precedence when both are specified.
All configuration is available via CLI flags, environment variables, or a TOML config file (`qpq-server.toml` by default, overridden with `--config`). CLI flags take precedence over the config file.
### Core flags
| Purpose | CLI flag | Env var | Default |
|---|---|---|---|
@@ -36,32 +41,99 @@ All configuration is available via CLI flags and environment variables. Environm
| TLS certificate (DER) | `--tls-cert` | `QPQ_TLS_CERT` | `data/server-cert.der` |
| TLS private key (DER) | `--tls-key` | `QPQ_TLS_KEY` | `data/server-key.der` |
| Data directory | `--data-dir` | `QPQ_DATA_DIR` | `data` |
| TOML config file | `--config` | `QPQ_CONFIG` | `qpq-server.toml` |
| Log level | -- | `RUST_LOG` | `info` |
### Authentication flags
| Purpose | CLI flag | Env var | Default |
|---|---|---|---|
| Static bearer token | `--auth-token` | `QPQ_AUTH_TOKEN` | (none) |
| Skip token requirement (dev only) | `--allow-insecure-auth` | `QPQ_ALLOW_INSECURE_AUTH` | `false` |
| Sealed sender mode | `--sealed-sender` | `QPQ_SEALED_SENDER` | `false` |
### Storage flags
| Purpose | CLI flag | Env var | Default |
|---|---|---|---|
| Storage backend | `--store-backend` | `QPQ_STORE_BACKEND` | `file` |
| SQLCipher DB path | `--db-path` | `QPQ_DB_PATH` | `data/qpq.db` |
| SQLCipher encryption key | `--db-key` | `QPQ_DB_KEY` | (empty = plaintext) |
### Transport and timeout flags
| Purpose | CLI flag | Env var | Default |
|---|---|---|---|
| Drain timeout (graceful shutdown) | `--drain-timeout` | `QPQ_DRAIN_TIMEOUT` | `30` s |
| Per-RPC timeout | `--rpc-timeout` | `QPQ_RPC_TIMEOUT` | `30` s |
| Storage operation timeout | `--storage-timeout` | `QPQ_STORAGE_TIMEOUT` | `10` s |
### Extension flags
| Purpose | CLI flag | Env var | Default |
|---|---|---|---|
| Plugin directory | `--plugin-dir` | `QPQ_PLUGIN_DIR` | (none) |
| WebSocket bridge address | `--ws-listen` | `QPQ_WS_LISTEN` | (none) |
| WebTransport address | `--webtransport-listen` | `QPQ_WEBTRANSPORT_LISTEN` | (none) |
| Federation | `--federation-enabled` | `QPQ_FEDERATION_ENABLED` | `false` |
| Federation domain | `--federation-domain` | `QPQ_FEDERATION_DOMAIN` | (none) |
| Federation listen address | `--federation-listen` | `QPQ_FEDERATION_LISTEN` | `0.0.0.0:7001` |
| Redact audit logs | `--redact-logs` | `QPQ_REDACT_LOGS` | `false` |
| Metrics listen address | `--metrics-listen` | `QPQ_METRICS_LISTEN` | (none) |
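The same settings can be expressed in the TOML config file. A minimal sketch follows -- the key names below are an assumption (this page only documents the flag and env-var names), so check `qpq-server --help` or a shipped example config for the authoritative spelling:

```toml
# qpq-server.toml (sketch; key names assumed to mirror the CLI flags)
listen = "0.0.0.0:7000"
data_dir = "data"
allow_insecure_auth = true  # dev only; prohibited under QPQ_PRODUCTION
store_backend = "file"
drain_timeout = 30
```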
### Examples
```bash
# Listen on a custom port
cargo run -p quicproquo-server -- --listen 0.0.0.0:9000
# Development: no auth token required
cargo run -p quicproquo-server -- --allow-insecure-auth
# Use pre-existing TLS credentials
# Listen on a custom port
cargo run -p quicproquo-server -- --allow-insecure-auth --listen 0.0.0.0:5001
# Use SQLCipher storage backend
cargo run -p quicproquo-server -- \
--tls-cert /etc/quicproquo/cert.der \
--tls-key /etc/quicproquo/key.der
--allow-insecure-auth \
--store-backend sql \
--db-path data/qpq.db \
--db-key mysecretkey
# Load server plugins from a directory
cargo run -p quicproquo-server -- \
--allow-insecure-auth \
--plugin-dir /path/to/plugins
# Enable WebSocket bridge for browser clients
cargo run -p quicproquo-server -- \
--allow-insecure-auth \
--ws-listen 0.0.0.0:9000
# Via environment variables
QPQ_LISTEN=0.0.0.0:9000 \
QPQ_LISTEN=0.0.0.0:5001 \
QPQ_ALLOW_INSECURE_AUTH=true \
RUST_LOG=debug \
cargo run -p quicproquo-server
```
### Production deployment
---
Set `QPQ_PRODUCTION=1` (or `true` / `yes`) so the server enforces production checks:
## Production deployment
- **Auth:** A non-empty `QPQ_AUTH_TOKEN` is required; the value `devtoken` is rejected.
- **TLS:** Existing cert and key files are required (auto-generation is disabled).
- **SQL store:** When `--store-backend=sql`, a non-empty `QPQ_DB_KEY` is required. An empty key leaves the database unencrypted on disk and is not acceptable for production.
Set `QPQ_PRODUCTION=true` to enable production validation. The server enforces:
- `--allow-insecure-auth` is **prohibited**.
- `QPQ_AUTH_TOKEN` must be set, non-empty, at least 16 characters, and not equal to `devtoken`.
- TLS cert and key files must already exist (auto-generation is disabled).
- When `--store-backend=sql`, `QPQ_DB_KEY` must be non-empty.
```bash
QPQ_PRODUCTION=true \
QPQ_AUTH_TOKEN=<strong-token> \
QPQ_TLS_CERT=/etc/quicproquo/cert.der \
QPQ_TLS_KEY=/etc/quicproquo/key.der \
QPQ_STORE_BACKEND=sql \
QPQ_DB_KEY=<strong-db-key> \
qpq-server
```
---
@@ -69,7 +141,7 @@ Set `QPQ_PRODUCTION=1` (or `true` / `yes`) so the server enforces production che
### Self-signed certificate auto-generation
If the files at `--tls-cert` and `--tls-key` do not exist when the server starts, it generates a self-signed certificate using the `rcgen` crate. The generated certificate includes three Subject Alternative Names:
If the files at `--tls-cert` and `--tls-key` do not exist when the server starts in non-production mode, it generates a self-signed certificate using the `rcgen` crate. The generated certificate includes three Subject Alternative Names:
- `localhost`
- `127.0.0.1`
@@ -89,61 +161,37 @@ To use a certificate issued by a CA or a custom self-signed certificate:
2. Point the server at them:
```bash
cargo run -p quicproquo-server -- \
--allow-insecure-auth \
--tls-cert cert.der \
--tls-key key.der
```
3. Distribute the certificate (or its CA root) to clients so they can verify the server. The client's `--ca-cert` flag accepts a DER file.
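As a starting point, a self-signed DER pair can be produced with `openssl`. This is a sketch -- the key type and SANs here are assumptions to adjust for your deployment, and `-addext` requires OpenSSL 1.1.1 or newer:

```shell
# Create a self-signed cert with localhost SANs and a P-256 key,
# then convert both the certificate and the key to DER encoding.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout key.pem -out cert.pem -days 365 -nodes \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
openssl x509 -in cert.pem -outform DER -out cert.der
openssl pkey -in key.pem -outform DER -out key.der
```

The resulting `cert.der` / `key.der` can then be passed via `--tls-cert` / `--tls-key` as shown above.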
### TLS configuration details
### TLS configuration
The server's TLS stack is configured as follows:
- **Protocol versions**: TLS 1.3 only (`rustls::version::TLS13`). TLS 1.2 and below are rejected.
- **Client authentication**: Disabled (`with_no_client_auth()`). The server does not request a client certificate. Client identity is established at the MLS layer via Ed25519 credentials, not at the TLS layer.
- **ALPN**: The server advertises `b"capnp"` as the application-layer protocol.
---
## ALPN negotiation
Both the server and client must agree on the ALPN token `b"capnp"` during the TLS handshake. This token is hardcoded in the server's TLS configuration:
```rust
tls.alpn_protocols = vec![b"capnp".to_vec()];
```
If a client connects with a different (or no) ALPN token, the QUIC handshake will fail with an ALPN mismatch error.
- **Protocol versions**: TLS 1.3 only. TLS 1.2 and below are rejected.
- **Client authentication**: Disabled. Client identity is established at the MLS/OPAQUE layer, not at the TLS layer.
- **ALPN**: The server advertises `b"qpq/1"` as the application-layer protocol.
---
## Storage
The server persists its state to the data directory (`--data-dir`, default `data/`):
The server persists its state to the data directory (`--data-dir`, default `data/`).
### File-backed store (default)
| File | Contents |
|---|---|
| `data/server-cert.der` | TLS certificate (DER) |
| `data/server-key.der` | TLS private key (DER) |
| `data/keypackages.bin` | `bincode`-serialised map of identity keys to KeyPackage queues |
| `data/deliveries.bin` | `bincode`-serialised map of `(channelId, recipientKey)` to message queues |
| `data/hybridkeys.bin` | `bincode`-serialised map of identity keys to hybrid (X25519 + ML-KEM-768) public keys |
| `data/keypackages.bin` | `bincode`-serialised KeyPackage queues |
| `data/deliveries.bin` | `bincode`-serialised delivery queues |
| `data/hybridkeys.bin` | `bincode`-serialised hybrid (X25519 + ML-KEM-768) public keys |
Storage is implemented by the `FileBackedStore` in `crates/quicproquo-server/src/storage.rs`. Every mutation (upload, enqueue, fetch) flushes the entire map to disk synchronously. This is suitable for proof-of-concept workloads but not production traffic. See [Storage Backend](../internals/storage-backend.md) for details.
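The flush-on-every-mutation pattern looks roughly like this in plain Rust -- a simplified sketch, since the real `FileBackedStore` serialises its maps with `bincode`; a throwaway text encoding stands in here:

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::PathBuf;

/// Illustrative flush-on-every-mutation store (not the real implementation).
struct FlushingStore {
    path: PathBuf,
    map: HashMap<String, Vec<u8>>,
}

impl FlushingStore {
    fn insert(&mut self, key: String, value: Vec<u8>) -> io::Result<()> {
        self.map.insert(key, value);
        // Every mutation rewrites the entire map synchronously.
        self.flush()
    }

    fn flush(&self) -> io::Result<()> {
        // Write to a temp file, then rename for crash consistency.
        let tmp = self.path.with_extension("tmp");
        let mut lines: Vec<String> = self
            .map
            .iter()
            .map(|(k, v)| format!("{k}={} bytes", v.len()))
            .collect();
        lines.sort();
        fs::write(&tmp, lines.join("\n"))?;
        fs::rename(&tmp, &self.path)
    }
}
```

The temp-file-plus-rename step gives crash consistency, but rewriting the full map on every call is why this backend suits proof-of-concept workloads rather than production traffic.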
### SQL store (recommended for production)
---
## Connection handling
Each incoming QUIC connection is handled in a `tokio::task::spawn_local` task on a shared `LocalSet`. The `capnp-rpc` library uses `Rc<RefCell<>>` internally, making it `!Send`, which is why all RPC tasks must run on a `LocalSet` rather than being spawned with `tokio::spawn`.
The connection lifecycle:
1. Accept incoming QUIC connection.
2. Complete TLS 1.3 handshake.
3. Accept a bidirectional QUIC stream.
4. Wrap the stream in a `capnp_rpc::twoparty::VatNetwork`.
5. Bootstrap a `NodeService` RPC endpoint.
6. Serve requests until the client disconnects or an error occurs.
When `--store-backend=sql`, all data is persisted in a SQLCipher-encrypted database at `--db-path`. The SQLite driver is statically bundled (`rusqlite` with `bundled-sqlcipher`).
---
@@ -153,20 +201,27 @@ The server uses `tracing` with `tracing-subscriber` and respects the `RUST_LOG`
```bash
# Default: info level
RUST_LOG=info cargo run -p quicproquo-server
RUST_LOG=info cargo run -p quicproquo-server -- --allow-insecure-auth
# Debug level for detailed RPC tracing
RUST_LOG=debug cargo run -p quicproquo-server
# Trace level for maximum verbosity
RUST_LOG=trace cargo run -p quicproquo-server
RUST_LOG=debug cargo run -p quicproquo-server -- --allow-insecure-auth
# Filter to specific crates
RUST_LOG=quicproquo_server=debug,quinn=warn cargo run -p quicproquo-server
RUST_LOG=quicproquo_server=debug,quinn=warn cargo run -p quicproquo-server -- --allow-insecure-auth
```
---
## Graceful shutdown
The server handles `SIGINT` (Ctrl-C) and `SIGTERM`. On receipt of a shutdown signal:
1. New connections are rejected immediately (`endpoint.close`).
2. In-flight RPC tasks are given `--drain-timeout` seconds (default: 30) to finish.
3. The process exits cleanly.
---
## Next steps
- [Running the Client](running-the-client.md) -- connect to the server and exercise the CLI