docs: rewrite mdBook documentation for v2 architecture
Update 25+ files and add 6 new pages to reflect the v2 migration from Cap'n Proto to Protobuf framing over QUIC. Integrates SDK and Operations docs into the mdBook, restructures SUMMARY.md, and rewrites the wire format, architecture, and protocol sections with accurate v2 content.
# Storage Backend

quicproquo uses two storage backends: `SqlStore` on the server side (SQLCipher-encrypted SQLite with Argon2id key derivation) and `DiskKeyStore` on the client side (a bincode-serialised file for MLS cryptographic key material).

**Sources:**

- `crates/quicproquo-server/src/sql_store.rs` (SqlStore)
- `crates/quicproquo-server/src/storage.rs` (Store trait, legacy FileBackedStore)
- `crates/quicproquo-core/src/keystore.rs` (DiskKeyStore, StoreCrypto)

---

## SqlStore (Server-Side)

`SqlStore` is the primary server-side storage backend. It wraps SQLCipher (SQLite with AES-256 encryption) via the `rusqlite` crate and provides a connection pool for concurrent access.

### Encryption

The database file is encrypted with SQLCipher using a key derived from a server-supplied passphrase. The key is passed as the SQLCipher `PRAGMA key` on connection open. Key derivation uses Argon2id: the server generates a random salt on first start and derives the 32-byte SQLCipher key material from the passphrase using Argon2id with server-configured parameters.

The database file is opaque without the key; an attacker with filesystem access cannot read any stored data without also compromising the server's key material.

### Connection pool

```rust
pub struct SqlStore {
    pool: Vec<Mutex<Connection>>, // default pool_size = 4
}
```

`SqlStore` maintains a fixed pool of SQLCipher connections (default: 4). Each request acquires a connection via `try_lock()` on each pool slot (a non-blocking fast path), falling back to blocking on the first connection if all are busy. WAL journal mode allows concurrent readers; writers are serialised by SQLite's locking protocol.

PRAGMA settings applied to every connection:

| PRAGMA | Value | Effect |
|--------|-------|--------|
| `journal_mode` | `WAL` | Write-ahead logging for concurrent reads |
| `synchronous` | `NORMAL` | fsync on WAL checkpoints only (performance vs. durability trade-off) |
| `foreign_keys` | `ON` | Enforce referential integrity |

### Schema and migrations
|
||||
|
||||
The schema version is tracked via `PRAGMA user_version`. On first open, `SqlStore` applies all pending migrations in order. Migrations are embedded as SQL strings at compile time.
|
||||
|
||||
Current schema version: **13**
|
||||
|
||||
| Migration | Version | Content |
|
||||
|-----------|---------|---------|
|
||||
| `001_initial.sql` | 1 | Users, key_packages, deliveries, hybrid_keys tables |
|
||||
| `002_add_seq.sql` | 3 | Delivery sequence numbers |
|
||||
| `003_channels.sql` | 4 | Channel-aware delivery queues |
|
||||
| `004_federation.sql` | 5 | Federation peer table |
|
||||
| `005_signing_key.sql` | 6 | Server signing key storage |
|
||||
| `006_kt_log.sql` | 7 | Key transparency Merkle log |
|
||||
| `007_add_expiry.sql` | 8 | TTL/expiry columns on deliveries |
|
||||
| `008_devices.sql` | 9 | Device registration table |
|
||||
| `009_sessions.sql` | 10 | Session token table |
|
||||
| `010_blobs.sql` | 11 | Blob storage table |
|
||||
| `011_recovery_bundles.sql` | 12 | Recovery bundle table |
|
||||
| `012_moderation.sql` | 13 | Reports and bans tables |
|
||||
|
||||
If the database's `user_version` is greater than `SCHEMA_VERSION`, the server refuses to open it (downgrade protection).
|
||||
|
||||
### Store trait

`SqlStore` implements the `Store` trait defined in `storage.rs`:

```rust
pub trait Store: Send + Sync {
    fn upload_key_package(&self, identity_key: &[u8], package: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_key_package(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn upload_hybrid_key(&self, identity_key: &[u8], hybrid_pk: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_hybrid_key(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn enqueue(&self, recipient_key: &[u8], channel_id: &[u8], payload: Vec<u8>, ...) -> Result<u64, StorageError>;
    fn fetch(&self, recipient_key: &[u8], channel_id: &[u8], limit: u32, ...) -> Result<Vec<(u64, Vec<u8>)>, StorageError>;
    fn ack(&self, recipient_key: &[u8], channel_id: &[u8], seq_up_to: u64, ...) -> Result<(), StorageError>;
    fn store_session(&self, record: SessionRecord) -> Result<(), StorageError>;
    fn fetch_session(&self, token: &[u8]) -> Result<Option<SessionRecord>, StorageError>;
    // ... and more
}
```

### Key package storage

Key packages are stored in the `key_packages` table:

```sql
CREATE TABLE key_packages (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    identity_key BLOB NOT NULL,
    package_data BLOB NOT NULL,
    created_at   INTEGER NOT NULL DEFAULT (unixepoch())
);
```

`upload_key_package` inserts a row. `fetch_key_package` selects and deletes the oldest row for the given identity key in a single transaction (an atomic FIFO pop). This guarantees MLS's single-use requirement.

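One way to express that select-and-delete pop in a single statement is a `DELETE ... RETURNING` with a correlated subquery. This is illustrative SQL only, not necessarily the exact query the crate runs, and `RETURNING` requires SQLite 3.35 or newer:

```sql
-- Pop the oldest KeyPackage for one identity, atomically.
DELETE FROM key_packages
WHERE id = (
    SELECT id FROM key_packages
    WHERE identity_key = :identity_key
    ORDER BY created_at ASC, id ASC
    LIMIT 1
)
RETURNING package_data;
```

Running the whole statement inside one transaction means two concurrent fetches can never hand out the same KeyPackage.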
### Delivery queue storage

Delivery messages are stored in the `deliveries` table with per-message sequence numbers:

```sql
CREATE TABLE deliveries (
    seq        INTEGER PRIMARY KEY AUTOINCREMENT,
    recipient  BLOB NOT NULL,
    channel_id BLOB NOT NULL DEFAULT '',
    device_id  BLOB NOT NULL DEFAULT '',
    payload    BLOB NOT NULL,
    expires_at INTEGER, -- NULL = no expiry
    message_id BLOB     -- idempotency key
);
```

`enqueue` inserts a row and returns the `seq`. `fetch` selects rows with `seq > last_ack` ordered by `seq` and returns them without deleting. `ack(seq_up_to)` deletes all rows with `seq <= seq_up_to` for the given recipient, channel, and device.

### Session storage

Sessions issued after OPAQUE login are stored in the `sessions` table:

```sql
CREATE TABLE sessions (
    token      BLOB NOT NULL PRIMARY KEY,
    identity   BLOB NOT NULL,
    device_id  BLOB,
    created_at INTEGER NOT NULL DEFAULT (unixepoch()),
    expires_at INTEGER
);
```

The `token` is the 32-byte random session token returned by `OpaqueLoginFinish`. The server validates incoming tokens by looking up this table.

### Error type

```rust
#[derive(thiserror::Error, Debug)]
pub enum StorageError {
    #[error("database error: {0}")]
    Db(String),
    #[error("serialization error")]
    Serde,
    #[error("not found")]
    NotFound,
}
```

---

## FileBackedStore (Server-Side, Legacy)

`FileBackedStore` was the original server-side storage backend. It uses bincode-serialised files with in-memory `Mutex`-protected `HashMap` structures. It remains available for development and testing, but `SqlStore` is the production backend.

### Structure

```rust
pub struct FileBackedStore {
    kp_path: PathBuf, // keypackages.bin
    ds_path: PathBuf, // deliveries.bin
    hk_path: PathBuf, // hybridkeys.bin
    key_packages: Mutex<HashMap<Vec<u8>, VecDeque<Vec<u8>>>>,  // identity -> KP queue
    deliveries: Mutex<HashMap<ChannelKey, VecDeque<Vec<u8>>>>, // (channel, recipient) -> msg queue
    hybrid_keys: Mutex<HashMap<Vec<u8>, Vec<u8>>>,             // identity -> hybrid PK
}
```

Each domain has its own `Mutex`-protected in-memory map and its own disk file. A `Mutex` (not an `RwLock`) is used because every read-path operation that modifies state (e.g., the `pop_front` in `fetch_key_package`) requires exclusive access.

File paths under the data directory:

| File | Contents |
|------|----------|
| `keypackages.bin` | KeyPackage queues (bincode `QueueMapV1`) |
| `deliveries.bin` | Delivery queues (bincode `QueueMapV2`) |
| `hybridkeys.bin` | Hybrid public keys (bincode `HashMap`) |

### Initialization

```rust
FileBackedStore::open(dir: impl AsRef<Path>) -> Result<Self, StorageError>
```

1. Creates the directory if it does not exist.
2. Loads each map from its respective file, or initializes an empty map if the file is missing.
3. Returns the initialized store.

The default data directory is `data/`, configurable via `--data-dir` / `QPQ_DATA_DIR`.

### Flush-on-Every-Write

Every mutation serializes the entire in-memory map to disk:

```text
upload_key_package(identity_key, package)
  |
  +-- lock key_packages Mutex
  |
  +-- map.entry(identity_key).or_default().push_back(package)
  |
  +-- flush_kp_map(path, &map)
  |     +-- QueueMapV1 { map: map.clone() }
  |     +-- bincode::serialize(&payload)
  |     +-- fs::write(path, bytes)
  |
  +-- unlock Mutex
```

This approach is deliberately simple:

- **Crash safety:** Every successful RPC response guarantees the data has been written to the filesystem.
- **No partial states in memory:** The entire map is serialized in a single `fs::write` (though not via a temp file with rename -- an MVP trade-off).
- **Performance:** Not suitable for production scale. Every write serializes and writes the full map, which is O(n) in the total number of stored entries.

**Production improvement path:** this is exactly what `SqlStore` provides -- incremental writes, WAL-based crash safety, and concurrent access without full serialization.

### KeyPackage Operations

| Method | Behavior |
|--------|----------|
| `upload_key_package(identity_key, package)` | Push to back of VecDeque; flush |
| `fetch_key_package(identity_key)` | Pop from front (FIFO, single-use); flush |

The KeyPackage map uses the `QueueMapV1` serialization wrapper:

```rust
#[derive(Serialize, Deserialize, Default)]
struct QueueMapV1 {
    map: HashMap<Vec<u8>, VecDeque<Vec<u8>>>,
}
```

### Delivery Queue Operations

| Method | Behavior |
|--------|----------|
| `enqueue(recipient_key, channel_id, payload)` | Construct ChannelKey; push to back; flush |
| `fetch(recipient_key, channel_id)` | Construct ChannelKey; drain entire VecDeque; flush |

The delivery map uses `QueueMapV2` with the compound `ChannelKey`:

```rust
#[derive(Serialize, Deserialize, Clone, Eq, PartialEq, Hash, Debug)]
pub struct ChannelKey {
    pub channel_id: Vec<u8>,
    pub recipient_key: Vec<u8>,
}

#[derive(Serialize, Deserialize, Default)]
struct QueueMapV2 {
    map: HashMap<ChannelKey, VecDeque<Vec<u8>>>,
}
```

See [Delivery Service Internals](delivery-service.md) for the full queue model and channel-aware routing semantics.

### V1/V2 Delivery Map Migration

The delivery map format evolved from V1 (keyed by recipient key only) to V2 (keyed by `ChannelKey` with channel ID + recipient key). The load function handles both formats transparently:

```rust
fn load_delivery_map(path: &Path) -> Result<HashMap<ChannelKey, VecDeque<Vec<u8>>>, StorageError> {
    let bytes = fs::read(path)?;

    // Try the V2 format first (channel-aware).
    if let Ok(map) = bincode::deserialize::<QueueMapV2>(&bytes) {
        return Ok(map.map);
    }

    // Fall back to the legacy V1 format: migrate by setting channel_id = empty.
    let legacy: QueueMapV1 = bincode::deserialize(&bytes)?;
    let mut upgraded = HashMap::new();
    for (recipient_key, queue) in legacy.map.into_iter() {
        upgraded.insert(
            ChannelKey { channel_id: Vec::new(), recipient_key },
            queue,
        );
    }
    Ok(upgraded)
}
```

Migration strategy:

1. Attempt to deserialize as V2 (`QueueMapV2`). If successful, use as-is.
2. If V2 fails, deserialize as V1 (`QueueMapV1`). Migrate each entry by wrapping the recipient key in a `ChannelKey` with an empty `channel_id`.
3. The next flush writes V2 format, completing the migration.

This in-place migration is transparent to clients. Legacy messages (pre-channel routing) appear under the empty channel ID and can still be fetched by clients that pass an empty `channelId`.

### Hybrid Key Operations

| Method | Behavior |
|--------|----------|
| `upload_hybrid_key(identity_key, hybrid_pk)` | Insert (overwrite); flush |
| `fetch_hybrid_key(identity_key)` | Read-only lookup; no flush needed |

The hybrid key map is a flat `HashMap<Vec<u8>, Vec<u8>>` serialized directly with bincode. Unlike KeyPackages, hybrid keys are not single-use -- they persist until overwritten.

### Limitations

Every write serialises the entire map to disk (O(n) per write), and there is no encryption: data is stored in plaintext. Not recommended for production deployments; use `SqlStore` instead.

---

## DiskKeyStore (Client-Side)

`DiskKeyStore` is the client-side key store that implements the openmls `OpenMlsKeyStore` trait. It holds MLS cryptographic key material, most importantly the HPKE init private keys created during KeyPackage generation.

### Structure

```rust
pub struct DiskKeyStore {
    path: Option<PathBuf>,                     // None = ephemeral (in-memory only)
    values: RwLock<HashMap<Vec<u8>, Vec<u8>>>, // key reference -> serialized MLS entity
}
```

The `RwLock` (not a `Mutex`) allows concurrent reads. Write operations (store, delete) take an exclusive lock and flush to disk.

### Modes

| Mode | Constructor | Persistence |
|------|-------------|-------------|
| Ephemeral | `DiskKeyStore::ephemeral()` | None. Data exists only in memory and is lost on process exit. |
| Persistent | `DiskKeyStore::persistent(path)` | Every write flushes the full map to disk. Survives process restarts. |

**Ephemeral mode** is used for tests and the `register` / `demo-group` CLI commands, where session resumption is not needed.

**Persistent mode** is used for production clients (`register-state`, `invite`, `join`, `send`, `recv` commands). The key store file path is derived from the state file path by changing the extension to `.ks`:

```rust
fn keystore_path(state_path: &Path) -> PathBuf {
    let mut path = state_path.to_path_buf();
    path.set_extension("ks");
    path
}
```

So `qpq-state.bin` produces a key store at `qpq-state.ks`.

### Serialisation format

MLS entities are bincode-serialised. The `DiskKeyStore` implements this with a two-layer scheme:

1. **Inner layer:** each MLS entity value (`V: MlsEntity`) is serialised with bincode, matching the `OpenMlsKeyStore` trait requirements of this codebase.
2. **Outer layer:** the entire `HashMap<Vec<u8>, Vec<u8>>` is bincode-serialised as the file on disk.

**Important:** do not use Protobuf or JSON for MLS entities. The `DiskKeyStore` in this codebase requires bincode; using a different format produces incompatible key material.

```rust
fn store<V: MlsEntity>(&self, k: &[u8], v: &V) -> Result<(), Self::Error> {
    let value = bincode::serialize(v)?;    // MlsEntity -> bincode bytes
    let mut values = self.values.write()?;
    values.insert(k.to_vec(), value);
    drop(values);                          // release the lock before I/O
    self.flush()                           // bincode-serialize the full HashMap to disk
}
```

### OpenMlsKeyStore implementation

| Trait method | DiskKeyStore behaviour |
|--------------|------------------------|
| `store(k, v)` | bincode-serialize the value, insert into the HashMap, flush to disk |
| `read(k)` | Look up the key, bincode-deserialize the value, return `Option<V>` |
| `delete(k)` | Remove from the HashMap, flush to disk |

The `read` method does not flush because it does not modify the map. A failed deserialization (corrupt value) returns `None` rather than an error, which matches the openmls `OpenMlsKeyStore` trait signature.

### Flush Behavior

```rust
fn flush(&self) -> Result<(), DiskKeyStoreError> {
    let Some(path) = &self.path else {
        return Ok(()); // ephemeral: no-op
    };
    let values = self.values.read().unwrap();
    let bytes = bincode::serialize(&*values)?;
    if let Some(dir) = path.parent() {
        fs::create_dir_all(dir)?; // ensure the parent directory exists
    }
    fs::write(path, bytes)?;
    Ok(())
}
```

Like `FileBackedStore`, the flush serializes the entire map on every write. For client-side usage the map is typically small (a handful of HPKE keys), so this is not a performance concern.

### Error Type

```rust
#[derive(thiserror::Error, Debug, PartialEq, Eq)]
pub enum DiskKeyStoreError {
    #[error("serialization error")]
    Serialization,
    #[error("io error: {0}")]
    Io(String),
}
```

---

## StoreCrypto

`StoreCrypto` bundles a `DiskKeyStore` with the `RustCrypto` provider from `openmls_rust_crypto`. It implements the openmls `OpenMlsCryptoProvider` trait, the single entry point openmls uses for all cryptographic operations:

```rust
pub struct StoreCrypto {
    crypto: RustCrypto,      // AES-GCM, SHA-256, X25519, Ed25519
    key_store: DiskKeyStore, // HPKE init keys, MLS epoch secrets
}
```

It is the `backend` field of [`GroupMember`](group-member-lifecycle.md) and is passed to every openmls operation: `KeyPackage::builder().build()`, `MlsGroup::new_with_group_id()`, `MlsGroup::new_from_welcome()`, `create_message()`, `process_message()`, and so on.

The critical property is that the **same `StoreCrypto` instance** (and therefore the same `DiskKeyStore`) must be used from `generate_key_package()` through `join_group()`, because the HPKE init private key is written to the key store at package-generation time and read back at group-join time.

---

## Storage architecture summary

```text
Server                                      Client
======                                      ======

SqlStore (production)                       DiskKeyStore
+-- SQLCipher-encrypted SQLite              +-- values (RwLock<HashMap>)
|   WAL mode, pool_size=4                   |   Persisted: {state}.ks
|   Key: Argon2id(passphrase, salt)         |   Format: bincode(HashMap<Vec<u8>, Vec<u8>>)
|   Schema version: 13                      |   Values: bincode(MlsEntity)
|   Tables: users, key_packages,            |
|     deliveries, sessions, blobs,          +-- Wrapped by StoreCrypto
|     devices, kt_log, recovery_bundles,    |     implements OpenMlsCryptoProvider
|     reports, banned_users, ...            |
|                                           +-- Used by GroupMember.backend
FileBackedStore (legacy / dev)
+-- keypackages.bin (bincode)
+-- deliveries.bin (bincode)
+-- hybridkeys.bin (bincode)
    No encryption. Not for production.
```

### Shared Design Patterns

The two file-backed stores (`FileBackedStore` and `DiskKeyStore`) share these characteristics:

1. **Full-map serialization.** Every write serializes the entire map to disk. Simple and correct, but O(n) per write.

2. **Bincode format.** The outer map is always bincode-serialized. Compact and fast, but not human-readable and not forward-compatible without wrapper structs.

3. **No WAL / journaling.** A crash during `fs::write` could leave a corrupt file. For development use this is acceptable -- the data can be regenerated (clients re-upload KeyPackages; delivery messages are ephemeral). `SqlStore` avoids this failure mode with SQLite's WAL journaling.

4. **No compaction.** Empty queues are not removed from the map, so the serialized size can grow with stale entries. A production implementation should periodically compact empty entries.

5. **Directory creation.** Both stores call `fs::create_dir_all` before writing, ensuring parent directories exist.

---

## Related pages

- [GroupMember Lifecycle](group-member-lifecycle.md) -- how `StoreCrypto` and `DiskKeyStore` are used during MLS operations
- [KeyPackage Exchange Flow](keypackage-exchange.md) -- KeyPackage upload and fetch through the server store
- [Delivery Service Internals](delivery-service.md) -- delivery queue operations
- [Authentication Service Internals](authentication-service.md) -- KeyPackage and session storage
- [Key Lifecycle and Zeroization](../cryptography/key-lifecycle.md) -- how HPKE keys are created and destroyed
- [Wire Format Overview](../wire-format/overview.md) -- frame format and transport
- [Method ID Reference](../wire-format/envelope-schema.md) -- RPC method IDs