# Storage Backend

quicprochat uses two storage backends: SqlStore on the server side (SQLCipher-encrypted SQLite with Argon2id key derivation) and DiskKeyStore on the client side (bincode-serialised file for MLS cryptographic key material).

Sources:

- `crates/quicprochat-server/src/sql_store.rs` (`SqlStore`)
- `crates/quicprochat-server/src/storage.rs` (`Store` trait, legacy `FileBackedStore`)
- `crates/quicprochat-core/src/keystore.rs` (`DiskKeyStore`, `StoreCrypto`)

## SqlStore (Server-Side)

SqlStore is the primary server-side storage backend. It wraps SQLCipher (SQLite with AES-256 encryption) via the rusqlite crate and provides a connection pool for concurrent access.

### Encryption

The database file is encrypted with SQLCipher using a key derived from a server-supplied passphrase. The key is passed as the SQLCipher PRAGMA key on connection open. Key derivation uses Argon2id: the server generates a random salt on first start and derives the 32-byte SQLCipher key material from the passphrase using Argon2id with server-configured parameters.

The database file is opaque without the key; an attacker with filesystem access cannot read any stored data without also compromising the server's key material.
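Applying the derived key could look like the sketch below. `raw_key_pragma` is a hypothetical helper, not the actual `SqlStore` code; it uses SQLCipher's raw-key form (a 64-hex-digit BLOB literal), which bypasses SQLCipher's internal key derivation, fitting here because the key is already Argon2id-derived.

```rust
// Sketch: formatting 32 bytes of Argon2id output as SQLCipher's raw-key
// PRAGMA. Hypothetical helper; the real SqlStore may phrase this differently.
fn raw_key_pragma(key: &[u8; 32]) -> String {
    // SQLCipher accepts raw key material as a 64-hex-digit BLOB literal,
    // skipping its own PBKDF2 step.
    let hex: String = key.iter().map(|b| format!("{:02x}", b)).collect();
    format!("PRAGMA key = \"x'{}'\";", hex)
}

fn main() {
    // An all-zero key, purely for illustration.
    let key = [0u8; 32];
    println!("{}", raw_key_pragma(&key));
}
```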

### Connection pool

```rust
pub struct SqlStore {
    pool: Vec<Mutex<Connection>>,  // default pool_size = 4
}
```

SqlStore maintains a fixed pool of SQLCipher connections (default: 4). Each request acquires a connection via try_lock() on each pool slot (non-blocking fast path), falling back to blocking on the first connection if all are busy. WAL journal mode allows concurrent readers; writers are serialised by SQLite's locking protocol.
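The acquisition strategy can be modelled with a std-only sketch (illustrative names, not the actual `SqlStore` internals; `Connection` stands in for `rusqlite::Connection`):

```rust
use std::sync::Mutex;

// Stand-in for rusqlite::Connection in this sketch.
struct Connection {
    id: usize,
}

// Illustrative model of the try_lock fast path described above.
struct Pool {
    slots: Vec<Mutex<Connection>>,
}

impl Pool {
    fn with_conn<R>(&self, f: impl FnOnce(&Connection) -> R) -> R {
        // Fast path: try every slot without blocking.
        for slot in &self.slots {
            if let Ok(conn) = slot.try_lock() {
                return f(&conn);
            }
        }
        // Slow path: all slots busy, block on the first connection.
        let conn = self.slots[0].lock().unwrap();
        f(&conn)
    }
}

fn main() {
    let pool = Pool {
        slots: (0..4).map(|id| Mutex::new(Connection { id })).collect(),
    };
    // Slot 0 is free, so the fast path takes it.
    assert_eq!(pool.with_conn(|c| c.id), 0);
    // With slot 0 held elsewhere, the fast path falls through to slot 1.
    let _busy = pool.slots[0].lock().unwrap();
    assert_eq!(pool.with_conn(|c| c.id), 1);
    println!("ok");
}
```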

PRAGMA settings applied to every connection:

| PRAGMA | Value | Effect |
|---|---|---|
| `journal_mode` | `WAL` | Write-ahead logging for concurrent reads |
| `synchronous` | `NORMAL` | fsync on WAL checkpoints only (performance vs. durability trade-off) |
| `foreign_keys` | `ON` | Enforce referential integrity |

### Schema and migrations

The schema version is tracked via PRAGMA user_version. On first open, SqlStore applies all pending migrations in order. Migrations are embedded as SQL strings at compile time.

Current schema version: 13

| Migration | Version | Content |
|---|---|---|
| `001_initial.sql` | 1 | Users, key_packages, deliveries, hybrid_keys tables |
| `002_add_seq.sql` | 3 | Delivery sequence numbers |
| `003_channels.sql` | 4 | Channel-aware delivery queues |
| `004_federation.sql` | 5 | Federation peer table |
| `005_signing_key.sql` | 6 | Server signing key storage |
| `006_kt_log.sql` | 7 | Key transparency Merkle log |
| `007_add_expiry.sql` | 8 | TTL/expiry columns on deliveries |
| `008_devices.sql` | 9 | Device registration table |
| `009_sessions.sql` | 10 | Session token table |
| `010_blobs.sql` | 11 | Blob storage table |
| `011_recovery_bundles.sql` | 12 | Recovery bundle table |
| `012_moderation.sql` | 13 | Reports and bans tables |

If the database's user_version is greater than SCHEMA_VERSION, the server refuses to open it (downgrade protection).
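The migration plan and the downgrade check can be sketched together (a std-only model under stated assumptions; the constant names and the `plan_migrations` helper are illustrative, not the real driver):

```rust
// Sketch of the migration driver described above. Migration SQL is embedded
// at compile time; versions mirror the table (only a few shown).
const SCHEMA_VERSION: u32 = 13;

// (target user_version, embedded SQL) pairs, applied in order.
const MIGRATIONS: &[(u32, &str)] = &[
    (1, "-- 001_initial.sql"),
    (3, "-- 002_add_seq.sql"),
    (13, "-- 012_moderation.sql"),
];

fn plan_migrations(user_version: u32) -> Result<Vec<&'static str>, String> {
    if user_version > SCHEMA_VERSION {
        // Downgrade protection: refuse databases written by a newer server.
        return Err(format!(
            "database schema {} is newer than supported {}",
            user_version, SCHEMA_VERSION
        ));
    }
    // Apply only the migrations this database has not seen yet.
    Ok(MIGRATIONS
        .iter()
        .filter(|(v, _)| *v > user_version)
        .map(|(_, sql)| *sql)
        .collect())
}

fn main() {
    // A fresh database (user_version = 0) gets every migration.
    assert_eq!(plan_migrations(0).unwrap().len(), MIGRATIONS.len());
    // A fully migrated database gets none.
    assert_eq!(plan_migrations(13).unwrap().len(), 0);
    // A newer database is rejected.
    assert!(plan_migrations(14).is_err());
    println!("ok");
}
```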

### Store trait

SqlStore implements the Store trait defined in storage.rs:

```rust
pub trait Store: Send + Sync {
    fn upload_key_package(&self, identity_key: &[u8], package: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_key_package(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn upload_hybrid_key(&self, identity_key: &[u8], hybrid_pk: Vec<u8>) -> Result<(), StorageError>;
    fn fetch_hybrid_key(&self, identity_key: &[u8]) -> Result<Option<Vec<u8>>, StorageError>;
    fn enqueue(&self, recipient_key: &[u8], channel_id: &[u8], payload: Vec<u8>, ...) -> Result<u64, StorageError>;
    fn fetch(&self, recipient_key: &[u8], channel_id: &[u8], limit: u32, ...) -> Result<Vec<(u64, Vec<u8>)>, StorageError>;
    fn ack(&self, recipient_key: &[u8], channel_id: &[u8], seq_up_to: u64, ...) -> Result<(), StorageError>;
    fn store_session(&self, record: SessionRecord) -> Result<(), StorageError>;
    fn fetch_session(&self, token: &[u8]) -> Result<Option<SessionRecord>, StorageError>;
    // ... and more
}
```

### Key package storage

Key packages are stored in the key_packages table:

```sql
CREATE TABLE key_packages (
    id             INTEGER PRIMARY KEY AUTOINCREMENT,
    identity_key   BLOB NOT NULL,
    package_data   BLOB NOT NULL,
    created_at     INTEGER NOT NULL DEFAULT (unixepoch())
);
```

`upload_key_package` inserts a row. `fetch_key_package` selects and deletes the oldest row for the given identity key in a single transaction (an atomic FIFO pop), which enforces the MLS requirement that each KeyPackage is used at most once.
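The semantics can be modelled in memory (illustrative only; the real backend does the select-and-delete in one SQL transaction):

```rust
use std::collections::{HashMap, VecDeque};

// In-memory model of key_packages semantics: upload appends, fetch
// atomically pops the oldest package (FIFO) so each is handed out once.
struct KeyPackages {
    by_identity: HashMap<Vec<u8>, VecDeque<Vec<u8>>>,
}

impl KeyPackages {
    fn upload(&mut self, identity: &[u8], package: Vec<u8>) {
        self.by_identity
            .entry(identity.to_vec())
            .or_default()
            .push_back(package);
    }

    // Equivalent to SELECT oldest + DELETE in one transaction.
    fn fetch(&mut self, identity: &[u8]) -> Option<Vec<u8>> {
        self.by_identity.get_mut(identity)?.pop_front()
    }
}

fn main() {
    let mut store = KeyPackages { by_identity: HashMap::new() };
    store.upload(b"alice", b"kp1".to_vec());
    store.upload(b"alice", b"kp2".to_vec());
    assert_eq!(store.fetch(b"alice"), Some(b"kp1".to_vec())); // oldest first
    assert_eq!(store.fetch(b"alice"), Some(b"kp2".to_vec()));
    assert_eq!(store.fetch(b"alice"), None); // single use: nothing left
    println!("ok");
}
```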

### Delivery queue storage

Delivery messages are stored in the deliveries table with per-message sequence numbers:

```sql
CREATE TABLE deliveries (
    seq          INTEGER PRIMARY KEY AUTOINCREMENT,
    recipient    BLOB NOT NULL,
    channel_id   BLOB NOT NULL DEFAULT '',
    device_id    BLOB NOT NULL DEFAULT '',
    payload      BLOB NOT NULL,
    expires_at   INTEGER,  -- NULL = no expiry
    message_id   BLOB      -- idempotency key
);
```

`enqueue` inserts a row and returns its `seq`. `fetch` selects rows with `seq > last_ack`, ordered by `seq`, and returns them without deleting. `ack(seq_up_to)` deletes all rows with `seq <= seq_up_to` for the given recipient, channel, and device.
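This enqueue/fetch/ack contract can be modelled for a single (recipient, channel, device) queue (a std-only sketch; names are illustrative):

```rust
use std::collections::BTreeMap;

// In-memory model of one delivery queue, mirroring the contract above:
// enqueue assigns increasing seq, fetch returns rows with seq > last_ack
// without deleting, ack(seq_up_to) deletes everything at or below it.
struct Queue {
    next_seq: u64,
    rows: BTreeMap<u64, Vec<u8>>, // seq -> payload
}

impl Queue {
    fn enqueue(&mut self, payload: Vec<u8>) -> u64 {
        self.next_seq += 1;
        self.rows.insert(self.next_seq, payload);
        self.next_seq
    }

    fn fetch(&self, last_ack: u64, limit: usize) -> Vec<(u64, Vec<u8>)> {
        self.rows
            .range(last_ack + 1..)
            .take(limit)
            .map(|(s, p)| (*s, p.clone()))
            .collect()
    }

    fn ack(&mut self, seq_up_to: u64) {
        self.rows.retain(|s, _| *s > seq_up_to);
    }
}

fn main() {
    let mut q = Queue { next_seq: 0, rows: BTreeMap::new() };
    for p in [b"a".to_vec(), b"b".to_vec(), b"c".to_vec()] {
        q.enqueue(p);
    }
    assert_eq!(q.fetch(0, 10).len(), 3); // nothing acked yet, all visible
    q.ack(2);                            // acknowledge seq 1 and 2
    assert_eq!(q.fetch(0, 10), vec![(3, b"c".to_vec())]);
    println!("ok");
}
```

Re-fetching before an ack returns the same rows, which is what makes delivery at-least-once from the client's point of view.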

### Session storage

Sessions issued after OPAQUE login are stored in the sessions table:

```sql
CREATE TABLE sessions (
    token       BLOB NOT NULL PRIMARY KEY,
    identity    BLOB NOT NULL,
    device_id   BLOB,
    created_at  INTEGER NOT NULL DEFAULT (unixepoch()),
    expires_at  INTEGER
);
```

The token is the 32-byte random session token returned by OpaqueLoginFinish. The server validates incoming tokens by looking up this table.
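A hypothetical validation sketch for these semantics (the `is_valid` helper is not the actual server code; it assumes `expires_at` is unix seconds and `NULL` means no expiry):

```rust
// A token is valid if a row exists and expires_at (unix seconds,
// None = no expiry) lies in the future. Hypothetical helper.
struct SessionRecord {
    expires_at: Option<u64>,
}

fn is_valid(record: Option<&SessionRecord>, now: u64) -> bool {
    match record {
        None => false, // unknown token: no row in the sessions table
        Some(r) => r.expires_at.map_or(true, |exp| exp > now),
    }
}

fn main() {
    assert!(!is_valid(None, 100));
    assert!(is_valid(Some(&SessionRecord { expires_at: Some(200) }), 100));
    assert!(!is_valid(Some(&SessionRecord { expires_at: Some(50) }), 100));
    assert!(is_valid(Some(&SessionRecord { expires_at: None }), 100));
    println!("ok");
}
```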

### Error type

```rust
#[derive(thiserror::Error, Debug)]
pub enum StorageError {
    #[error("database error: {0}")]
    Db(String),
    #[error("serialization error")]
    Serde,
    #[error("not found")]
    NotFound,
}
```

## FileBackedStore (Server-Side, Legacy)

FileBackedStore was the original server-side storage backend. It uses bincode-serialised files backed by in-memory, Mutex-protected HashMap structures. It remains available for development and testing, but SqlStore is the production backend.

### Structure

```rust
pub struct FileBackedStore {
    kp_path:      PathBuf,                                      // keypackages.bin
    ds_path:      PathBuf,                                      // deliveries.bin
    hk_path:      PathBuf,                                      // hybridkeys.bin
    key_packages: Mutex<HashMap<Vec<u8>, VecDeque<Vec<u8>>>>,
    deliveries:   Mutex<HashMap<ChannelKey, VecDeque<Vec<u8>>>>,
    hybrid_keys:  Mutex<HashMap<Vec<u8>, Vec<u8>>>,
}
```

File paths under the data directory:

| File | Contents |
|---|---|
| `keypackages.bin` | KeyPackage queues (bincode `QueueMapV1`) |
| `deliveries.bin` | Delivery queues (bincode `QueueMapV2`) |
| `hybridkeys.bin` | Hybrid public keys (bincode `HashMap`) |

Every write serialises the entire map to disk (O(n) per write). No encryption: data is stored in plaintext. Not recommended for production deployments; use SqlStore instead.


## DiskKeyStore (Client-Side)

DiskKeyStore is the client-side key store that implements the openmls OpenMlsKeyStore trait. It holds MLS cryptographic key material, most importantly the HPKE init private keys created during KeyPackage generation.

### Structure

```rust
pub struct DiskKeyStore {
    path:   Option<PathBuf>,                   // None = ephemeral (in-memory only)
    values: RwLock<HashMap<Vec<u8>, Vec<u8>>>, // key reference -> serialized MLS entity
}
```

Modes

Mode Constructor Persistence
Ephemeral DiskKeyStore::ephemeral() None. Data exists only in memory. Lost on process exit.
Persistent DiskKeyStore::persistent(path) Yes. Every write flushes the full map to disk.

Persistent mode is used for production clients. The key store path is derived from the state file by changing the extension to .ks.
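The path derivation amounts to swapping the extension, as a small sketch shows (the file name `alice.state` is illustrative):

```rust
use std::path::{Path, PathBuf};

// Deriving the key-store path from the state file, as described above:
// the extension is replaced with `.ks`.
fn keystore_path(state_file: &Path) -> PathBuf {
    state_file.with_extension("ks")
}

fn main() {
    assert_eq!(
        keystore_path(Path::new("alice.state")),
        PathBuf::from("alice.ks")
    );
    println!("{}", keystore_path(Path::new("alice.state")).display());
}
```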

### Serialisation format

DiskKeyStore uses a two-layer bincode scheme:

  1. Inner layer: each MLS entity value (`V: MlsEntity`) is bincode-serialised into bytes before it is inserted into the in-memory map.
  2. Outer layer: the entire `HashMap<Vec<u8>, Vec<u8>>` is bincode-serialised to the file on disk.

Both layers must remain bincode: switching either encoding (e.g. to Protobuf or JSON) would produce a key store that existing clients cannot deserialise, rendering the stored key material unusable.

```rust
fn store<V: MlsEntity>(&self, k: &[u8], v: &V) -> Result<(), Self::Error> {
    let value = bincode::serialize(v)?;  // MlsEntity -> bincode bytes
    let mut values = self.values.write()?;
    values.insert(k.to_vec(), value);
    drop(values);                        // release the lock before flushing
    self.flush()                         // bincode-serialize full HashMap to disk
}
```

### OpenMlsKeyStore implementation

| Trait method | DiskKeyStore behaviour |
|---|---|
| `store(k, v)` | bincode-serialise the value, insert into the HashMap, flush to disk |
| `read(k)` | Look up the key, bincode-deserialise the value, return `Option<V>` |
| `delete(k)` | Remove from the HashMap, flush to disk |

### StoreCrypto

StoreCrypto bundles DiskKeyStore with the RustCrypto provider:

```rust
pub struct StoreCrypto {
    crypto:    RustCrypto,   // AES-GCM, SHA-256, X25519, Ed25519
    key_store: DiskKeyStore, // HPKE init keys, MLS epoch secrets
}
```

It implements `OpenMlsCryptoProvider` and is the `backend` field of `GroupMember`. The same `StoreCrypto` instance must be used consistently from `generate_key_package()` through `join_group()`, because the HPKE init private key is written at package generation time and read at group join time.


## Storage architecture summary

```text
Server                                    Client
======                                    ======

SqlStore (production)                     DiskKeyStore
+-- SQLCipher-encrypted SQLite            +-- values (RwLock<HashMap>)
|   WAL mode, pool_size=4                 |   Persisted: {state}.ks
|   Key: Argon2id(passphrase, salt)       |   Format: bincode(HashMap<Vec<u8>, Vec<u8>>)
|   Schema: 13 migrations                 |   Values: bincode(MlsEntity)
|   Tables: users, key_packages,          |
|   deliveries, sessions, blobs,          +-- Wrapped by StoreCrypto
|   devices, kt_log, recovery_bundles,    |   implements OpenMlsCryptoProvider
|   reports, banned_users, ...            |
|                                         +-- Used by GroupMember.backend
FileBackedStore (legacy / dev)
+-- keypackages.bin (bincode)
+-- deliveries.bin (bincode)
+-- hybridkeys.bin (bincode)
    No encryption. Not for production.
```