docs: rewrite mdBook documentation for v2 architecture
Update 25+ files and add 6 new pages to reflect the v2 migration from Cap'n Proto to Protobuf framing over QUIC. Integrate SDK and Operations docs into the mdBook, restructure SUMMARY.md, and rewrite the wire format, architecture, and protocol sections with accurate v2 content.
# Authentication Service Internals

The Authentication Service handles user registration and login via the OPAQUE asymmetric password-authenticated key exchange (PAKE) protocol. It also manages MLS KeyPackages, hybrid (X25519 + ML-KEM-768) post-quantum keys, and session token issuance.

This page covers the server-side OPAQUE flow, the session token lifecycle, KeyPackage storage, and the hybrid key endpoints.

**Sources:**
- `crates/quicproquo-server/src/domain/` (OPAQUE handlers, session management)
- `crates/quicproquo-server/src/sql_store.rs` (SqlStore persistence)
- `proto/qpq/v1/auth.proto` (wire schema)

---

## OPAQUE Protocol

quicproquo uses the OPAQUE asymmetric PAKE (built on the RFC 9497 OPRF) for user authentication. The password never leaves the client and is never known to the server. The server stores an OPAQUE registration record derived from the password, but this record cannot be used to recover the password even if the server is fully compromised.

### Registration (IDs 100-101)

Registration takes two round trips.

```text
Client                                       Server
  |                                            |
  | [1] OpaqueRegisterStartRequest             |
  |     username: "alice"                      |
  |     request: <OPAQUE RegistrationReq>      |
  |------------------------------------------->|
  |                                            |
  | [2] OpaqueRegisterStartResponse            |
  |     response: <OPAQUE RegistrationResp>    |
  |<-------------------------------------------|
  |                                            |
  | [3] OpaqueRegisterFinishRequest            |
  |     username: "alice"                      |
  |     upload: <OPAQUE RegistrationUpload>    |
  |     identity_key: <Ed25519 pubkey>         |
  |------------------------------------------->|
  |                                            |
  | [4] OpaqueRegisterFinishResponse           |
  |     success: true                          |
  |<-------------------------------------------|
```

**Step [1]:** The client generates a `RegistrationRequest` blob using the `opaque-ke` crate. This contains a blinded version of the password; the server cannot extract the raw password.

**Step [2]:** The server generates a `RegistrationResponse` using its OPAQUE server keypair and the client's request. The server does not yet persist anything.

**Step [3]:** The client completes the OPAQUE registration and sends a `RegistrationUpload` blob containing the password-derived key material (the client's OPAQUE envelope and public key). The client also sends its Ed25519 identity public key.

**Step [4]:** The server stores the `RegistrationUpload` blob as the user's OPAQUE record, indexed by `username`. The Ed25519 identity key is stored alongside the record. Registration fails with `success: false` if the username is already taken.

### Login (IDs 102-103)

Login also takes two round trips and produces a session token.

```text
Client                                       Server
  |                                            |
  | [1] OpaqueLoginStartRequest                |
  |     username: "alice"                      |
  |     request: <OPAQUE CredentialReq>        |
  |------------------------------------------->|
  |                                            |
  | [2] OpaqueLoginStartResponse               |
  |     response: <OPAQUE CredentialResp>      |
  |<-------------------------------------------|
  |                                            |
  | [3] OpaqueLoginFinishRequest               |
  |     username: "alice"                      |
  |     finalization: <OPAQUE Finalization>    |
  |     identity_key: <Ed25519 pubkey>         |
  |------------------------------------------->|
  |                                            |
  | [4] OpaqueLoginFinishResponse              |
  |     session_token: <32 bytes>              |
  |<-------------------------------------------|
```

**Step [1]:** The client generates a `CredentialRequest` using the `opaque-ke` crate.

**Step [2]:** The server looks up the user's OPAQUE record by `username` and generates a `CredentialResponse`. If the username is unknown, the server generates a fake response using a blinded dummy record to prevent username enumeration.

**Step [3]:** The client verifies the server's `CredentialResponse` against the password, derives the shared export key, and sends a `CredentialFinalization` blob that proves knowledge of the password. The client also sends its Ed25519 identity key.

**Step [4]:** The server verifies the `CredentialFinalization`. If verification succeeds and the identity key matches the registered key, the server generates a `session_token` (32 random bytes), stores it in the session table, and returns it to the client. If verification fails, the server returns an error status with an empty `session_token`.

### Session token lifecycle

The `session_token` is a 32-byte random bearer credential issued at login. It is:

- Stored in the SQLCipher `sessions` table (see [Storage Backend](storage-backend.md)).
- Included by the client in subsequent QUIC connections for authentication.
- Validated by the server on connection establishment; the server rejects connections with unknown or expired tokens.
- Invalidated on `DeleteAccount` or explicit logout.

The `Auth` message in `common.proto` carries the token for federation contexts:

```protobuf
message Auth {
  bytes access_token = 1;
  bytes device_id = 2;
}
```
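
As a sketch of step [4], token issuance only needs a CSPRNG. The helper below is illustrative (not the server's actual function) and assumes the `rand` crate:

```rust
use rand::RngCore;

// Issue a fresh 32-byte bearer token. Session tokens must come from a
// CSPRNG such as OsRng, never from a seedable PRNG.
fn issue_session_token() -> [u8; 32] {
    let mut token = [0u8; 32];
    rand::rngs::OsRng.fill_bytes(&mut token);
    token
}
```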

---

## KeyPackage Storage

### Data model

MLS KeyPackages are single-use by RFC 9420 requirement. The server stores a FIFO queue of KeyPackages per identity key.

```text
identity_key (32-byte Ed25519 pubkey)
    -> VecDeque<KeyPackage bytes>
```

Each identity can have multiple KeyPackages queued. Clients should upload several packages after registration so that concurrent group invitations can each consume one without exhausting the supply.

### UploadKeyPackage (ID 300)

**Handler logic:**

1. Validate `identity_key` (exactly 32 bytes) and `package` (non-empty, <= 1 MiB).
2. Compute `SHA-256(package)` as the fingerprint.
3. Push the package to the back of the identity's queue in the SQL store.
4. Return the fingerprint.

The fingerprint allows the uploading client to detect server-side tampering: a peer that fetches a KeyPackage can compare its SHA-256 hash against the fingerprint communicated out-of-band.
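
A minimal sketch of the fingerprint computation in step 2, assuming the `sha2` crate (the function name is illustrative):

```rust
use sha2::{Digest, Sha256};

// Fingerprint = SHA-256 over the exact TLS-encoded KeyPackage bytes as uploaded.
fn keypackage_fingerprint(package: &[u8]) -> [u8; 32] {
    Sha256::digest(package).into()
}
```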

### FetchKeyPackage (ID 301)

**Handler logic:**

1. Validate `identity_key` (exactly 32 bytes).
2. Pop from the front of the identity's queue (an atomic operation).
3. Return the package bytes, or empty bytes if the queue is empty.

**Single-use semantics:** the pop is atomic with respect to the store lock, so concurrent fetch requests will not receive the same package. This matters for MLS security: reusing a KeyPackage would allow conflicting group states. An empty response is not an error -- it means the target identity has exhausted its KeyPackage supply and needs to upload more.

---

## Hybrid Key Endpoints

Hybrid (X25519 + ML-KEM-768) public keys are used for post-quantum sealed envelope encryption. Unlike KeyPackages, hybrid keys are not single-use. Each identity stores exactly one hybrid key; uploading a new key overwrites the previous one.

### UploadHybridKey (ID 302)

**Handler logic:**

1. Validate `identity_key` (32 bytes) and `hybrid_public_key` (non-empty).
2. Store the hybrid key, overwriting any previous value for this identity.
3. Return an empty response.

### FetchHybridKey (ID 303)

Non-destructive lookup. Returns the stored hybrid public key, or empty bytes if none is stored. The key persists across fetches. See [Hybrid KEM](../protocol-layers/hybrid-kem.md) for how clients use these keys to wrap MLS payloads in post-quantum envelopes.

### FetchHybridKeys (ID 304)

Batch variant. Returns one key per input identity key, in the same order. Missing keys are returned as empty bytes at the corresponding index.
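
A sketch of the order-preserving batch lookup, written against the `Store` trait shown on the [Storage Backend](storage-backend.md) page (the helper name is illustrative):

```rust
// One result slot per input key; missing keys become empty bytes.
fn fetch_hybrid_keys(
    store: &dyn Store,
    identity_keys: &[Vec<u8>],
) -> Result<Vec<Vec<u8>>, StorageError> {
    identity_keys
        .iter()
        .map(|k| Ok(store.fetch_hybrid_key(k)?.unwrap_or_default()))
        .collect()
}
```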

---

## Key Transparency Integration

The key transparency log (a Merkle append-only log) records key revocations and allows clients to audit the integrity of the key directory.

### RevokeKey (ID 510)

Appends a revocation entry to the KT Merkle log and returns the leaf index of the new entry. Valid reasons: `"compromised"`, `"superseded"`, `"user_revoked"`.

### CheckRevocation (ID 511)

Returns the revocation status of an identity key: whether it is revoked, the reason, and the revocation timestamp in milliseconds.

### AuditKeyTransparency (ID 520)

Returns a range of entries from the append-only log for client-side Merkle verification. Clients can verify the returned `root` hash against the Merkle tree built from the entries.
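
A minimal sketch of the client-side check, assuming a binary Merkle tree over SHA-256 leaf hashes. The leaf encoding and the odd-node rule are defined by the server's KT module, not here:

```rust
use sha2::{Digest, Sha256};

// Recompute a Merkle root over the returned entries. Illustrative only:
// the real leaf encoding and odd-node rule live in the server's KT module.
fn merkle_root(entries: &[Vec<u8>]) -> [u8; 32] {
    let mut level: Vec<[u8; 32]> =
        entries.iter().map(|e| Sha256::digest(e).into()).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let mut h = Sha256::new();
                h.update(pair[0]);
                // A lone node at the end is hashed with itself here
                // (one common convention; confirm against the server).
                h.update(*pair.last().unwrap());
                h.finalize().into()
            })
            .collect();
    }
    level.first().copied().unwrap_or([0u8; 32])
}
```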

---

## Server implementation structure

```rust
// Domain handler (crates/quicproquo-server/src/domain/)
struct AuthHandler {
    store: Arc<SqlStore>,        // SQLCipher persistence
    opaque_server: OpaqueServer, // opaque-ke server state
}
```

All connections share the same `SqlStore` via `Arc`. The OPAQUE server state contains the server's long-term OPAQUE keypair, which is generated on first start and persisted to the database.

---

## Related pages

- [KeyPackage Exchange Flow](keypackage-exchange.md) -- end-to-end upload and fetch flow including client-side logic
- [Delivery Service Internals](delivery-service.md) -- the DS half of the server
- [GroupMember Lifecycle](group-member-lifecycle.md) -- how KeyPackages are generated and consumed
- [Storage Backend](storage-backend.md) -- SqlStore and FileBackedStore persistence
- [Auth Schema](../wire-format/auth-schema.md) -- Protobuf wire definitions
- [Method ID Reference](../wire-format/envelope-schema.md) -- all 44 method IDs
- [Hybrid KEM](../protocol-layers/hybrid-kem.md) -- post-quantum envelope encryption

---
# Storage Backend

quicproquo uses two storage backends: `SqlStore` on the server side (SQLCipher-encrypted SQLite with Argon2id key derivation) and `DiskKeyStore` on the client side (a bincode-serialised file for MLS cryptographic key material).

**Sources:**
- `crates/quicproquo-server/src/sql_store.rs` (SqlStore)
- `crates/quicproquo-server/src/storage.rs` (Store trait, FileBackedStore legacy)
- `crates/quicproquo-core/src/keystore.rs` (DiskKeyStore, StoreCrypto)

---

## SqlStore (Server-Side)

`SqlStore` is the primary server-side storage backend. It wraps SQLCipher (SQLite with AES-256 encryption) via the `rusqlite` crate and provides a connection pool for concurrent access.

### Encryption

The database file is encrypted with SQLCipher using a key derived from a server-supplied passphrase. On first start the server generates a random salt, derives 32 bytes of key material from the passphrase with Argon2id (using server-configured parameters), and passes the result as the SQLCipher `PRAGMA key` on every connection open.

Without the key the database file is opaque: an attacker with filesystem access cannot read any stored data without also compromising the server's key material.
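
A minimal sketch of the derive-and-open path, assuming the `argon2` and `rusqlite` crates. The helper name and parameter choices are illustrative, not the server's exact ones:

```rust
use argon2::Argon2;
use rusqlite::Connection;

// Derive 32 bytes of SQLCipher key material and apply it on open.
// (Sketch; the real Argon2id parameters are server-configured.)
fn open_encrypted(path: &str, passphrase: &[u8], salt: &[u8]) -> rusqlite::Result<Connection> {
    let mut key = [0u8; 32];
    Argon2::default()
        .hash_password_into(passphrase, salt, &mut key)
        .expect("argon2 derivation failed");

    let conn = Connection::open(path)?;
    // SQLCipher accepts raw key material as a hex blob literal: x'...'
    let hex: String = key.iter().map(|b| format!("{b:02x}")).collect();
    conn.pragma_update(None, "key", format!("x'{hex}'"))?;
    Ok(conn)
}
```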

### Connection pool

```rust
pub struct SqlStore {
    pool: Vec<Mutex<Connection>>, // default pool_size = 4
}
```

`SqlStore` maintains a fixed pool of SQLCipher connections (default: 4). Each request tries `try_lock()` on each pool slot in turn (a non-blocking fast path), falling back to blocking on the first connection if all are busy. WAL journal mode allows concurrent readers; writers are serialised by SQLite's locking protocol.
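
A sketch of the acquire path under those assumptions (the function name is hypothetical):

```rust
use std::sync::{Mutex, MutexGuard};
use rusqlite::Connection;

// Fast path: first free slot wins. Fallback: block on slot 0.
fn acquire(pool: &[Mutex<Connection>]) -> MutexGuard<'_, Connection> {
    for slot in pool {
        if let Ok(guard) = slot.try_lock() {
            return guard;
        }
    }
    pool[0].lock().expect("poisoned connection lock")
}
```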

PRAGMA settings applied to every connection:

| PRAGMA | Value | Effect |
|--------|-------|--------|
| `journal_mode` | `WAL` | Write-ahead logging for concurrent reads |
| `synchronous` | `NORMAL` | fsync on WAL checkpoints only (performance vs. durability trade-off) |
| `foreign_keys` | `ON` | Enforce referential integrity |

### Schema and migrations

The schema version is tracked via `PRAGMA user_version`. On first open, `SqlStore` applies all pending migrations in order. Migrations are embedded as SQL strings at compile time.

Current schema version: **13**

| Migration | Version | Content |
|-----------|---------|---------|
| `001_initial.sql` | 1 | Users, key_packages, deliveries, hybrid_keys tables |
| `002_add_seq.sql` | 3 | Delivery sequence numbers |
| `003_channels.sql` | 4 | Channel-aware delivery queues |
| `004_federation.sql` | 5 | Federation peer table |
| `005_signing_key.sql` | 6 | Server signing key storage |
| `006_kt_log.sql` | 7 | Key transparency Merkle log |
| `007_add_expiry.sql` | 8 | TTL/expiry columns on deliveries |
| `008_devices.sql` | 9 | Device registration table |
| `009_sessions.sql` | 10 | Session token table |
| `010_blobs.sql` | 11 | Blob storage table |
| `011_recovery_bundles.sql` | 12 | Recovery bundle table |
| `012_moderation.sql` | 13 | Reports and bans tables |

If the database's `user_version` is greater than `SCHEMA_VERSION`, the server refuses to open it (downgrade protection).
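
A minimal sketch of the version check and migration loop, assuming migrations are `(version, sql)` pairs (the names and file paths are illustrative):

```rust
use rusqlite::Connection;

const SCHEMA_VERSION: i64 = 13;

// (version the migration brings the schema to, embedded SQL)
const MIGRATIONS: &[(i64, &str)] = &[
    (1, include_str!("migrations/001_initial.sql")),
    // ...
    (13, include_str!("migrations/012_moderation.sql")),
];

fn migrate(conn: &Connection) -> rusqlite::Result<()> {
    let current: i64 = conn.query_row("PRAGMA user_version", [], |r| r.get(0))?;
    if current > SCHEMA_VERSION {
        panic!("database schema {current} is newer than this server supports");
    }
    for (version, sql) in MIGRATIONS {
        if *version > current {
            conn.execute_batch(sql)?;
            conn.pragma_update(None, "user_version", version)?;
        }
    }
    Ok(())
}
```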

### Store trait

`SqlStore` implements the `Store` trait defined in `storage.rs`:

```rust
pub trait Store: Send + Sync {
    fn upload_key_package(&self, identity_key: &[u8], package: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_key_package(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn upload_hybrid_key(&self, identity_key: &[u8], hybrid_pk: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_hybrid_key(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn enqueue(&self, recipient_key: &[u8], channel_id: &[u8], payload: Vec<u8>, ...) -> Result<u64, StorageError>;
    fn fetch(&self, recipient_key: &[u8], channel_id: &[u8], limit: u32, ...) -> Result<Vec<(u64, Vec<u8>)>, StorageError>;
    fn ack(&self, recipient_key: &[u8], channel_id: &[u8], seq_up_to: u64, ...) -> Result<(), StorageError>;
    fn store_session(&self, record: SessionRecord) -> Result<(), StorageError>;
    fn fetch_session(&self, token: &[u8]) -> Result<Option<SessionRecord>, StorageError>;
    // ... and more
}
```

### Key package storage

Key packages are stored in the `key_packages` table:

```sql
CREATE TABLE key_packages (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    identity_key BLOB NOT NULL,
    package_data BLOB NOT NULL,
    created_at   INTEGER NOT NULL DEFAULT (unixepoch())
);
```

`upload_key_package` inserts a row. `fetch_key_package` selects and deletes the oldest row for the given identity key in a single transaction (an atomic FIFO pop). This guarantees MLS's single-use requirement.
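
A sketch of the atomic pop against the table above (a single statement, so no explicit transaction is needed; assumes SQLite >= 3.35 for `DELETE ... RETURNING`):

```rust
use rusqlite::{Connection, OptionalExtension};

// Atomically remove and return the oldest package for one identity.
fn fetch_key_package(conn: &Connection, identity_key: &[u8]) -> rusqlite::Result<Option<Vec<u8>>> {
    conn.query_row(
        "DELETE FROM key_packages
         WHERE id = (SELECT id FROM key_packages
                     WHERE identity_key = ?1
                     ORDER BY id LIMIT 1)
         RETURNING package_data",
        [identity_key],
        |row| row.get(0),
    )
    .optional()
}
```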

### Delivery queue storage

Delivery messages are stored in the `deliveries` table with per-message sequence numbers:

```sql
CREATE TABLE deliveries (
    seq        INTEGER PRIMARY KEY AUTOINCREMENT,
    recipient  BLOB NOT NULL,
    channel_id BLOB NOT NULL DEFAULT '',
    device_id  BLOB NOT NULL DEFAULT '',
    payload    BLOB NOT NULL,
    expires_at INTEGER, -- NULL = no expiry
    message_id BLOB     -- idempotency key
);
```

`enqueue` inserts a row and returns the `seq`. `fetch` selects rows with `seq > last_ack` ordered by `seq` and returns them without deleting. `ack(seq_up_to)` deletes all rows with `seq <= seq_up_to` for the given recipient, channel, and device.
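
The queue operations, sketched with hypothetical helper names (simplified: the real queries also filter by `device_id`):

```rust
use rusqlite::{params, Connection};

// Return up to `limit` undelivered messages after the client's last ack.
fn fetch_after(conn: &Connection, recipient: &[u8], channel: &[u8], last_ack: u64, limit: u32)
    -> rusqlite::Result<Vec<(u64, Vec<u8>)>>
{
    let mut stmt = conn.prepare(
        "SELECT seq, payload FROM deliveries
         WHERE recipient = ?1 AND channel_id = ?2 AND seq > ?3
         ORDER BY seq LIMIT ?4",
    )?;
    let rows = stmt.query_map(params![recipient, channel, last_ack, limit], |r| {
        Ok((r.get::<_, u64>(0)?, r.get::<_, Vec<u8>>(1)?))
    })?;
    rows.collect()
}

// Acknowledge everything up to and including `seq_up_to`.
fn ack(conn: &Connection, recipient: &[u8], channel: &[u8], seq_up_to: u64) -> rusqlite::Result<()> {
    conn.execute(
        "DELETE FROM deliveries
         WHERE recipient = ?1 AND channel_id = ?2 AND seq <= ?3",
        params![recipient, channel, seq_up_to],
    )?;
    Ok(())
}
```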

### Session storage

Sessions issued after OPAQUE login are stored in the `sessions` table:

```sql
CREATE TABLE sessions (
    token      BLOB NOT NULL PRIMARY KEY,
    identity   BLOB NOT NULL,
    device_id  BLOB,
    created_at INTEGER NOT NULL DEFAULT (unixepoch()),
    expires_at INTEGER
);
```

The `token` is the 32-byte random session token returned by `OpaqueLoginFinish`. The server validates incoming tokens by looking them up in this table.
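
A sketch of token validation against this table (a hypothetical helper; expiry is checked in SQL so clock handling stays in one place):

```rust
use rusqlite::{params, Connection, OptionalExtension};

// Returns the identity bound to a live session token, or None.
fn validate_token(conn: &Connection, token: &[u8]) -> rusqlite::Result<Option<Vec<u8>>> {
    conn.query_row(
        "SELECT identity FROM sessions
         WHERE token = ?1
           AND (expires_at IS NULL OR expires_at > unixepoch())",
        params![token],
        |row| row.get(0),
    )
    .optional()
}
```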

### Error type

```rust
#[derive(thiserror::Error, Debug)]
pub enum StorageError {
    #[error("database error: {0}")]
    Db(String),
    #[error("serialization error")]
    Serde,
    #[error("not found")]
    NotFound,
}
```

---

## FileBackedStore (Server-Side, Legacy)

`FileBackedStore` was the original server-side storage backend. It uses bincode-serialised files with in-memory `Mutex`-protected `HashMap` structures. It remains available for development and testing, but `SqlStore` is the production backend.

### Structure

```rust
pub struct FileBackedStore {
    kp_path: PathBuf, // keypackages.bin
    ds_path: PathBuf, // deliveries.bin
    hk_path: PathBuf, // hybridkeys.bin
    key_packages: Mutex<HashMap<Vec<u8>, VecDeque<Vec<u8>>>>,  // identity -> KP queue
    deliveries: Mutex<HashMap<ChannelKey, VecDeque<Vec<u8>>>>, // (channel, recipient) -> msg queue
    hybrid_keys: Mutex<HashMap<Vec<u8>, Vec<u8>>>,             // identity -> hybrid PK
}
```

Each domain has its own `Mutex`-protected in-memory map and its own disk file under the data directory (default `data/`, configurable via `--data-dir` / `QPQ_DATA_DIR`):

| File | Contents |
|------|----------|
| `keypackages.bin` | KeyPackage queues (bincode `QueueMapV1`) |
| `deliveries.bin` | Delivery queues (bincode `QueueMapV2`) |
| `hybridkeys.bin` | Hybrid public keys (bincode `HashMap`) |

Every write serialises the entire map to disk (O(n) per write), and data is stored in plaintext with no encryption. Not recommended for production deployments; use `SqlStore` instead.

---

## DiskKeyStore (Client-Side)

`DiskKeyStore` is the client-side key store that implements the openmls `OpenMlsKeyStore` trait. It holds MLS cryptographic key material, most importantly the HPKE init private keys created during KeyPackage generation.

### Structure

```rust
pub struct DiskKeyStore {
    path: Option<PathBuf>,                     // None = ephemeral (in-memory only)
    values: RwLock<HashMap<Vec<u8>, Vec<u8>>>, // key reference -> serialized MLS entity
}
```

The `RwLock` (not `Mutex`) allows concurrent reads. Write operations (store, delete) take an exclusive lock and flush to disk.

### Modes

| Mode | Constructor | Persistence |
|------|-------------|-------------|
| Ephemeral | `DiskKeyStore::ephemeral()` | None. Data exists only in memory. Lost on process exit. |
| Persistent | `DiskKeyStore::persistent(path)` | Yes. Every write flushes the full map to disk. Survives process restarts. |

**Ephemeral mode** is used for tests and the `register` / `demo-group` CLI commands where session resumption is not needed.

**Persistent mode** is used for production clients. The key store path is derived from the state file by changing the extension to `.ks`.

### Serialisation format

MLS entities are serialised with bincode. The `DiskKeyStore` implements a two-layer scheme:

1. **Inner layer:** each MLS entity value (`V: MlsEntity`) is bincode-serialised, matching the `OpenMlsKeyStore` trait requirements.
2. **Outer layer:** the entire `HashMap<Vec<u8>, Vec<u8>>` is bincode-serialised as the file on disk.

**Important:** do not use Protobuf or JSON for MLS entities. This codebase requires bincode for the `DiskKeyStore`; using a different format will produce incompatible key material.

```rust
fn store<V: MlsEntity>(&self, k: &[u8], v: &V) -> Result<(), Self::Error> {
    let value = bincode::serialize(v)?; // MlsEntity -> bincode bytes
    let mut values = self.values.write()?;
    values.insert(k.to_vec(), value);
    drop(values); // release the lock before disk I/O
    self.flush()  // bincode-serialize the full HashMap to disk
}
```

### OpenMlsKeyStore implementation

| Trait method | DiskKeyStore behaviour |
|---|---|
| `store(k, v)` | bincode-serialize value, insert into HashMap, flush to disk |
| `read(k)` | Look up key, bincode-deserialize value, return `Option<V>` |
| `delete(k)` | Remove from HashMap, flush to disk |

The `read` method does not flush because it does not modify the map. A failed deserialization (a corrupt value) returns `None` rather than an error, matching the openmls `OpenMlsKeyStore` trait signature.

### StoreCrypto

`StoreCrypto` bundles a `DiskKeyStore` with the `RustCrypto` provider from `openmls_rust_crypto`:

```rust
pub struct StoreCrypto {
    crypto: RustCrypto,      // AES-GCM, SHA-256, X25519, Ed25519
    key_store: DiskKeyStore, // HPKE init keys, MLS epoch secrets
}
```

It implements `OpenMlsCryptoProvider` and is the `backend` field of [`GroupMember`](group-member-lifecycle.md). The same `StoreCrypto` instance must be used consistently from `generate_key_package()` through `join_group()`, because the HPKE init private key is written at package generation time and read back at group join time.

---

## Storage architecture summary

```text
Server                                    Client
======                                    ======

SqlStore (production)                     DiskKeyStore
+-- SQLCipher-encrypted SQLite            +-- values (RwLock<HashMap>)
|   WAL mode, pool_size=4                 |   Persisted: {state}.ks
|   Key: Argon2id(passphrase, salt)       |   Format: bincode(HashMap<Vec<u8>, Vec<u8>>)
|   Schema: 13 migrations                 |   Values: bincode(MlsEntity)
|   Tables: users, key_packages,          |
|     deliveries, sessions, blobs,        +-- Wrapped by StoreCrypto
|     devices, kt_log, recovery_bundles,  |   implements OpenMlsCryptoProvider
|     reports, banned_users, ...          |
|                                         +-- Used by GroupMember.backend
FileBackedStore (legacy / dev)
+-- keypackages.bin (bincode)
+-- deliveries.bin (bincode)
+-- hybridkeys.bin (bincode)
    No encryption. Not for production.
```

---

## Related pages

- [GroupMember Lifecycle](group-member-lifecycle.md) -- how `StoreCrypto` and `DiskKeyStore` are used during MLS operations
- [KeyPackage Exchange Flow](keypackage-exchange.md) -- upload and fetch flow
- [Delivery Service Internals](delivery-service.md) -- delivery queue operations
- [Authentication Service Internals](authentication-service.md) -- KeyPackage and session storage
- [Key Lifecycle and Zeroization](../cryptography/key-lifecycle.md) -- how HPKE keys are created and destroyed
- [Wire Format Overview](../wire-format/overview.md) -- frame format and transport
- [Method ID Reference](../wire-format/envelope-schema.md) -- RPC method IDs