feat: add delivery sequence numbers + major server/client refactor

Delivery sequence numbers (MLS epoch ordering fix):
- schemas/node.capnp: add Envelope{seq,data} struct; enqueue returns seq:UInt64;
  fetch/fetchWait return List(Envelope) instead of List(Data)
- storage.rs: Store trait enqueue returns u64; fetch/fetch_limited return
  Vec<(u64, Vec<u8>)>; FileBackedStore gains QueueMapV3 with per-inbox seq
  counters and V2→V3 on-disk migration
- migrations/002_add_seq.sql: seq column, delivery_seq_counters table, index
- sql_store.rs: atomic UPSERT counter via RETURNING, ORDER BY seq, SCHEMA_VERSION→3
- node_service/delivery.rs: builds Envelope list; returns seq from enqueue
- client/rpc.rs: enqueue→u64, fetch_all/fetch_wait→Vec<(u64,Vec<u8>)>
- client/commands.rs: sort-by-seq before MLS processing; retry loop in cmd_recv
  and receive_pending_plaintexts for correct epoch ordering
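The sort-by-seq step above can be illustrated with a minimal std-only sketch (names and the stubbed processing are illustrative; the real client feeds each payload into MLS after sorting):

```rust
/// Process fetched envelopes in delivery order. The server assigns a
/// monotonically increasing `seq` per inbox, so sorting by it restores the
/// order MLS epoch transitions must be applied in, even if the transport
/// returns messages out of order.
fn process_in_order(mut envelopes: Vec<(u64, Vec<u8>)>) -> Vec<u64> {
    // Sort by the server-assigned delivery sequence number.
    envelopes.sort_by_key(|&(seq, _)| seq);
    // Hand each payload to MLS in order (stubbed here as collecting seqs).
    envelopes.into_iter().map(|(seq, _data)| seq).collect()
}

fn main() {
    let fetched = vec![(3, b"c".to_vec()), (1, b"a".to_vec()), (2, b"b".to_vec())];
    assert_eq!(process_in_order(fetched), vec![1, 2, 3]);
    println!("ok");
}
```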

Server refactor:
- Split monolithic main.rs into node_service/{mod,delivery,auth_ops,key_ops,p2p_ops}
- Add auth.rs (token validation, rate limiting), config.rs, metrics.rs, tls.rs
- Add SQL migrations runner (001_initial.sql, 002_add_seq.sql)
- OPAQUE PAKE login/registration, sealed-sender mode, queue depth limit (1000)
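The queue depth limit can be sketched as follows (a hypothetical in-memory `Inbox` for illustration only; the actual server persists queues via `FileBackedStore` / `sql_store.rs`):

```rust
use std::collections::VecDeque;

// Per-inbox cap; the commit describes a limit of 1000.
const MAX_DEPTH: usize = 1000;

struct Inbox {
    next_seq: u64,
    queue: VecDeque<(u64, Vec<u8>)>,
}

impl Inbox {
    fn new() -> Self {
        Inbox { next_seq: 0, queue: VecDeque::new() }
    }

    /// Assigns a monotonically increasing seq, or rejects when full.
    fn enqueue(&mut self, data: Vec<u8>) -> Option<u64> {
        if self.queue.len() >= MAX_DEPTH {
            return None; // depth limit reached: reject rather than grow unboundedly
        }
        let seq = self.next_seq;
        self.next_seq += 1;
        self.queue.push_back((seq, data));
        Some(seq)
    }
}

fn main() {
    let mut inbox = Inbox::new();
    assert_eq!(inbox.enqueue(b"a".to_vec()), Some(0));
    assert_eq!(inbox.enqueue(b"b".to_vec()), Some(1));
    println!("ok");
}
```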

Client refactor:
- Split lib.rs into client/{commands,rpc,state,retry,hex,mod}
- Add cmd_whoami, cmd_health, cmd_check_key, cmd_ping subcommands
- Add cmd_register_user, cmd_login (OPAQUE), cmd_refresh_keypackage
- Hybrid PQ envelope (X25519 + ML-KEM-768) on all send/recv paths
- E2E test suite expanded

Other:
- quicnprotochat-gui: Tauri 2 desktop GUI skeleton (backend + HTML UI)
- quicnprotochat-p2p: iroh-based P2P transport stub
- quicnprotochat-core: app_message, hybrid_crypto modules; GroupMember API updates
- .github/workflows/size-lint.yml: binary size regression check
- docs: protocol comparison, roadmap updates, fully-operational checklist
2026-02-22 20:40:12 +01:00
parent b5b361e2ff
commit 6b8b61c6ae
56 changed files with 10693 additions and 3024 deletions


@@ -0,0 +1,22 @@
[package]
name = "quicnprotochat-gui"
version = "0.1.0"
edition = "2021"
description = "Native GUI for quicnprotochat (Tauri 2)."
license = "MIT"

[[bin]]
name = "quicnprotochat-gui"
path = "src/main.rs"

[dependencies]
quicnprotochat-core = { path = "../quicnprotochat-core" }
quicnprotochat-client = { path = "../quicnprotochat-client" }
quicnprotochat-proto = { path = "../quicnprotochat-proto" }
tauri = { version = "2", features = [] }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }

[build-dependencies]
tauri-build = "2"


@@ -0,0 +1,32 @@
# quicnprotochat-gui

Native GUI for quicnprotochat using [Tauri 2](https://v2.tauri.app/). The UI runs in a webview; all server-facing work (capnp-rpc, `node_service::Client`) runs on a **dedicated backend thread** with a tokio `LocalSet`, since that code is `!Send`.

## Backend threading model

- A single **backend thread** runs a tokio `LocalSet` and a request-response loop.
- The UI thread sends commands over an `mpsc` channel: `Whoami { state_path, password }` or `Health { server, ca_cert, server_name }`.
- For each request, the backend runs sync code (whoami) or `LocalSet::run_until(async { ... })` (health). It then sends `Result<String, String>` back on the provided reply channel.
- Tauri commands (`whoami`, `health`) block on that reply so the frontend gets a simple async-style result.
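The request-reply handshake described above can be sketched with plain `std` channels (a minimal illustration; the real backend carries typed commands and runs futures on a `LocalSet`):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a toy backend thread: each request string gets a "handled: ..." reply.
fn spawn_toy_backend() -> mpsc::Sender<(String, mpsc::Sender<String>)> {
    let (tx, rx) = mpsc::channel::<(String, mpsc::Sender<String>)>();
    thread::spawn(move || {
        while let Ok((cmd, reply_tx)) = rx.recv() {
            let _ = reply_tx.send(format!("handled: {cmd}"));
        }
    });
    tx
}

// Post one request and block on its dedicated reply channel,
// mirroring how the Tauri commands wait for the backend.
fn request(tx: &mpsc::Sender<(String, mpsc::Sender<String>)>, cmd: &str) -> String {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send((cmd.to_string(), reply_tx)).expect("backend gone");
    reply_rx.recv().expect("no reply")
}

fn main() {
    let tx = spawn_toy_backend();
    assert_eq!(request(&tx, "whoami"), "handled: whoami");
    assert_eq!(request(&tx, "health"), "handled: health");
    println!("ok");
}
```

Each request carries its own reply sender, so replies can never be mismatched even if requests interleave.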
## How to run

From the workspace root:

```bash
cargo run -p quicnprotochat-gui
```

**Linux:** Tauri uses GTK. Install development packages if the build fails, e.g.:

- Debian/Ubuntu: `sudo apt install libgtk-3-dev libwebkit2gtk-4.1-dev`
- Fedora: `sudo dnf install gtk3-devel webkit2gtk4.1-devel`

## Frontend

The frontend is static HTML in `ui/index.html` (no npm or build step). It provides:

- **Whoami**: state path (and optional password); calls `whoami` and shows JSON (identity_key, fingerprint, etc.).
- **Health**: server address; calls `health` and shows server status and RTT JSON.

The default CA cert and server name for health are the same as the CLI's (`data/server-cert.der`, `localhost`) unless overridden via optional params.


@@ -0,0 +1,3 @@
fn main() {
    tauri_build::build()
}


@@ -0,0 +1,11 @@
{
  "$schema": "https://schema.tauri.app/config/2/capability",
  "identifier": "default",
  "description": "Capability for the main window (custom commands whoami, health are allowed by default)",
  "windows": ["main"],
  "permissions": [
    "core:default",
    "core:window:allow-close",
    "core:window:allow-set-title"
  ]
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
{"default":{"identifier":"default","description":"Capability for the main window (custom commands whoami, health are allowed by default)","local":true,"windows":["main"],"permissions":["core:default","core:window:allow-close","core:window:allow-set-title"]}}

File diff suppressed because it is too large

File diff suppressed because it is too large

Binary file not shown (image, 2.1 KiB).


@@ -0,0 +1,86 @@
//! Backend service running on a dedicated thread with a tokio LocalSet.
//!
//! All server-facing work (capnp-rpc, node_service::Client) is !Send and must run on this
//! single thread. The UI thread sends commands over a channel; this thread runs
//! `LocalSet::run_until` for each request and sends the result back.

use std::path::PathBuf;
use std::sync::mpsc;
use std::thread;

use tokio::runtime::Builder;
use tokio::task::LocalSet;

use quicnprotochat_client::{cmd_health_json, whoami_json};

/// Commands the UI can send to the backend thread.
pub enum BackendCommand {
    Whoami {
        state_path: String,
        password: Option<String>,
    },
    Health {
        server: String,
        ca_cert: PathBuf,
        server_name: String,
    },
}

/// Response sent back to the UI.
pub type BackendResponse = Result<String, String>;

/// Spawn the backend thread and return a sender to post commands and a join handle.
///
/// The backend runs a tokio LocalSet and processes one command at a time:
/// for each received command it runs `LocalSet::run_until(future)` (for async commands)
/// or runs sync code (whoami), then sends the result on the provided reply channel.
pub fn spawn_backend() -> (
    mpsc::Sender<(BackendCommand, mpsc::Sender<BackendResponse>)>,
    thread::JoinHandle<()>,
) {
    let (tx, rx) = mpsc::channel::<(BackendCommand, mpsc::Sender<BackendResponse>)>();
    let handle = thread::spawn(move || {
        let rt = Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("backend tokio runtime");
        let local = LocalSet::new();
        while let Ok((cmd, reply_tx)) = rx.recv() {
            let result = run_command(&local, &rt, cmd);
            let _ = reply_tx.send(result);
        }
    });
    (tx, handle)
}

fn run_command(
    local: &LocalSet,
    rt: &tokio::runtime::Runtime,
    cmd: BackendCommand,
) -> BackendResponse {
    match cmd {
        BackendCommand::Whoami { state_path, password } => {
            let path = PathBuf::from(&state_path);
            whoami_json(&path, password.as_deref()).map_err(|e| e.to_string())
        }
        BackendCommand::Health {
            server,
            ca_cert,
            server_name,
        } => {
            // Request-response: we run LocalSet::run_until for this single request so capnp-rpc
            // and connect_node stay on this thread (!Send).
            let fut = cmd_health_json(&server, &ca_cert, &server_name);
            rt.block_on(local.run_until(fut)).map_err(|e| e.to_string())
        }
    }
}

/// Default CA cert path (relative to cwd or absolute); same default as CLI.
pub fn default_ca_cert() -> PathBuf {
    PathBuf::from("data/server-cert.der")
}

/// Default TLS server name.
pub fn default_server_name() -> String {
    "localhost".to_string()
}


@@ -0,0 +1,76 @@
//! quicnprotochat native GUI (Tauri 2).
//!
//! The backend runs on a dedicated thread with a tokio LocalSet; all server-facing
//! work (capnp-rpc, node_service::Client) is dispatched there. Tauri commands
//! block on the request-response channel until the backend returns.

mod backend;

use std::path::PathBuf;
use std::sync::mpsc;

use backend::{spawn_backend, BackendCommand};

/// Shared state: sender to the backend thread.
struct BackendState {
    tx: mpsc::Sender<(BackendCommand, mpsc::Sender<backend::BackendResponse>)>,
}

/// Runs whoami on the backend thread and returns a JSON string (identity_key, fingerprint, etc.).
#[tauri::command]
fn whoami(
    state: tauri::State<BackendState>,
    state_path: String,
    password: Option<String>,
) -> Result<String, String> {
    let (reply_tx, reply_rx) = mpsc::channel();
    state
        .tx
        .send((BackendCommand::Whoami { state_path, password }, reply_tx))
        .map_err(|e| e.to_string())?;
    reply_rx.recv().map_err(|e| e.to_string())?
}

/// Runs the health check on the backend thread (LocalSet::run_until) and returns status JSON.
#[tauri::command]
fn health(
    state: tauri::State<BackendState>,
    server: String,
    ca_cert: Option<String>,
    server_name: Option<String>,
) -> Result<String, String> {
    let ca_cert = ca_cert
        .map(PathBuf::from)
        .unwrap_or_else(backend::default_ca_cert);
    let server_name = server_name.unwrap_or_else(backend::default_server_name);
    let (reply_tx, reply_rx) = mpsc::channel();
    state
        .tx
        .send((
            BackendCommand::Health {
                server,
                ca_cert,
                server_name,
            },
            reply_tx,
        ))
        .map_err(|e| e.to_string())?;
    reply_rx.recv().map_err(|e| e.to_string())?
}

#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    let (backend_tx, _backend_handle) = spawn_backend();
    tauri::Builder::default()
        .manage(BackendState { tx: backend_tx })
        .invoke_handler(tauri::generate_handler![whoami, health])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}


@@ -0,0 +1,5 @@
//! Desktop entry point for quicnprotochat-gui.

fn main() {
    quicnprotochat_gui::run()
}


@@ -0,0 +1,24 @@
{
  "$schema": "https://schema.tauri.app/config/2",
  "productName": "quicnprotochat-gui",
  "identifier": "chat.quicnproto.gui",
  "build": {
    "frontendDist": "./ui",
    "beforeBuildCommand": "",
    "beforeDevCommand": ""
  },
  "app": {
    "windows": [
      {
        "title": "quicnprotochat",
        "width": 640,
        "height": 480
      }
    ],
    "security": {
      "csp": null
    }
  },
  "bundle": {},
  "plugins": {}
}


@@ -0,0 +1,54 @@
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>quicnprotochat</title>
    <style>
      body { font-family: system-ui, sans-serif; margin: 1rem; }
      button { margin: 0.25rem; padding: 0.5rem 1rem; cursor: pointer; }
      #output { white-space: pre-wrap; background: #f0f0f0; padding: 0.75rem; margin-top: 1rem; min-height: 4rem; border-radius: 4px; }
      .error { color: #c00; }
    </style>
  </head>
  <body>
    <h1>quicnprotochat</h1>
    <p>
      <button id="whoami">Whoami</button>
      <button id="health">Health</button>
    </p>
    <label>State path: <input id="statePath" type="text" value="quicnprotochat-state.bin" size="32" /></label>
    <br />
    <label>Server: <input id="server" type="text" value="127.0.0.1:7000" size="24" /></label>
    <div id="output">Click Whoami or Health. Results appear here.</div>
    <script>
      const output = document.getElementById('output');
      const statePath = document.getElementById('statePath');
      const server = document.getElementById('server');

      function show(result, isError = false) {
        output.textContent = result;
        output.className = isError ? 'error' : '';
      }

      const invoke = window.__TAURI__?.core?.invoke;
      if (!invoke) {
        show('Tauri API not available (not running inside Tauri?).', true);
      } else {
        document.getElementById('whoami').addEventListener('click', function () {
          show('Running whoami…');
          invoke('whoami', { statePath: statePath.value.trim(), password: null })
            .then(function (s) { show(s); })
            .catch(function (e) { show(String(e), true); });
        });
        document.getElementById('health').addEventListener('click', function () {
          show('Running health…');
          invoke('health', { server: server.value.trim() })
            .then(function (s) { show(s); })
            .catch(function (e) { show(String(e), true); });
        });
      }
    </script>
  </body>
</html>