13 Commits

Author SHA1 Message Date
d2ad0dd21a chore: add CCC logo asset 2026-05-04 14:48:14 +00:00
9e647f37d5 docs: add FAPP research paper LaTeX sources
Add paper directory with LaTeX source, bibliography, and Makefile
for the FAPP (Federated Application Protocol) research paper.
Build artifacts are gitignored.
2026-04-12 14:16:24 +00:00
da0085f1a6 feat: add observability module and wire MeshNode run() with background tasks
Add health checks (/healthz), Prometheus metrics export (/metricsz),
and tracing spans to the P2P mesh node. MeshNode.run() starts GC and
health server as background tasks, returning a RunHandle for lifecycle
management. Health endpoint returns 503 during graceful shutdown drain.
2026-04-11 17:52:03 +02:00
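The drain behaviour can be sketched in a few lines (illustrative std-only Rust; the flag and function names are assumptions, not the module's actual API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical drain flag: the node flips its equivalent when graceful
// shutdown begins, so /healthz answers 503 and load balancers stop
// routing new connections while in-flight work drains.
fn healthz_status(draining: &AtomicBool) -> u16 {
    if draining.load(Ordering::Relaxed) { 503 } else { 200 }
}

fn main() {
    let draining = AtomicBool::new(false);
    assert_eq!(healthz_status(&draining), 200); // healthy while running
    draining.store(true, Ordering::Relaxed);    // shutdown starts
    assert_eq!(healthz_status(&draining), 503); // drain: fail checks
    println!("ok");
}
```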
95ce8898fd feat: add mesh network visualizer
- D3.js force-directed graph for real-time mesh visualization
- WebSocket server (mesh-viz-bridge crate) for live updates
- Demo mode with simulated topology
- JSONL file upload for offline analysis
- Optional viz logging in mesh_node forwarding
2026-04-06 21:43:28 +02:00
99d36679c8 docs: add CLAUDE.md, unignore from .gitignore 2026-04-06 16:57:43 +02:00
a856f9bb53 feat: wire traffic resistance, implement v2 CLI commands, add auth expiry detection
Server:
- Wire traffic resistance decoy generator into main.rs startup behind
  --traffic-resistance flag + --decoy-interval-ms config (feature-gated)

Client:
- Implement v2 CLI one-shot commands: send, recv, dm, group create, group invite
  All previously printed "coming soon" — now fully functional with MLS state
  restoration, peer resolution, KeyPackage fetch, and MLS encryption pipeline

SDK:
- Add SdkError::SessionExpired variant + is_auth_expired() helper for
  detecting expired session tokens (RpcStatus::Unauthorized)
- Add ClientEvent::AuthExpired for UI-layer session expiry notification
2026-04-05 00:03:12 +02:00
f58ce2529d feat: add 11 features and bug fixes across server, SDK, and client
Server fixes:
- Wire v2 moderation handlers to ModerationService (SQL persistence) —
  bans now survive restarts instead of living in-memory DashMap
- Add admin role enforcement via QPC_ADMIN_KEYS env var for ban/unban
- Fix audit.rs now_iso8601() to emit actual ISO-8601 timestamps
- Add group admin authorization — only creator can remove members or
  update metadata

Server features:
- Add DeleteBlob RPC (method 602) with filesystem cleanup
- Register delete_blob in v2 handler method registry

SDK features:
- Add ClientEvent::IdentityKeyChanged for safety number change alerts
- Add ClientEvent::ReadReceipt and DeliveryConfirmation variants
- Add peer_identity_keys table with store/get methods for key tracking
- Add search_messages() full-text search across all conversations
- Add delete_conversation() with cascading message/outbox cleanup

Client features:
- Wire v2 TUI message sending to SDK MLS encryption pipeline
- Add /search command to v2 REPL with cross-conversation results
- Add /delete-conversation command to v2 REPL
- Add unread count badges in v1 TUI sidebar (yellow+bold styling)
2026-04-04 23:31:37 +02:00
4dadd01c6b feat: add E2E encryption module to meshservice
X25519 key agreement + HKDF-SHA256 + ChaCha20-Poly1305 AEAD for
opt-in payload encryption. Each message uses a fresh ephemeral key
for forward secrecy. 11 new tests cover roundtrip, wrong-key
rejection, tampering, wire format integration, and edge cases.
2026-04-03 10:48:16 +02:00
fb6b80c81c feat: wire FAPP message handling into mesh router
When a MeshEnvelope is delivered locally and its payload starts with a
known FAPP wire tag (0x01-0x05), MeshNode.process_incoming now delegates
to FappRouter instead of returning a raw Deliver action. Nodes without
FAPP capabilities still receive FAPP-tagged payloads as normal Deliver
actions, preserving backward compatibility.

Adds IncomingAction::Fapp variant, is_fapp_payload() helper, and three
integration tests covering the routing, passthrough, and no-router cases.
2026-04-03 07:44:19 +02:00
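A minimal sketch of the tag check described above (assumed shape; the real `is_fapp_payload()` lives in quicprochat-p2p):

```rust
// FAPP wire tags occupy 0x01..=0x05; any other first byte means the
// payload falls through to the normal Deliver action.
fn is_fapp_payload(payload: &[u8]) -> bool {
    payload.first().map_or(false, |&b| (0x01..=0x05).contains(&b))
}

fn main() {
    assert!(is_fapp_payload(&[0x03, 0xaa, 0xbb])); // known FAPP tag
    assert!(!is_fapp_payload(&[0x06]));            // outside tag range
    assert!(!is_fapp_payload(&[]));                // empty payload
    println!("ok");
}
```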
8eba12170e feat: integrate meshservice crate into workspace
- Add meshservice to workspace members
- Fix quicprochat-client: add MeshTrace/MeshStats slash commands
- Add integration test: meshservice_tcp_transport
- Document integration points in README and docs/status.md
- Verify shared identity (IdentityKeypair → MeshAddress)
2026-04-01 18:56:25 +02:00
a3023ecac1 docs: update status with MeshNode integration 2026-04-01 18:46:01 +02:00
150f30b0d6 feat(p2p): add MeshNode integrating all production modules
New mesh_node.rs providing a production-ready node:
- MeshNodeBuilder for fluent configuration
- MeshConfig integration for all settings
- MeshMetrics tracking for all operations
- Rate limiting on incoming messages
- Backpressure controller
- Graceful shutdown via ShutdownCoordinator
- Optional FappRouter based on capabilities
- MeshRouter for envelope routing
- TransportManager for multi-transport support

Key APIs:
- MeshNodeBuilder::new().fapp_relay().build()
- node.process_incoming() with rate limiting + metrics
- node.gc() for store/routing table cleanup
- node.shutdown() for graceful termination

222 tests passing (203 lib + 3 fapp_flow + 16 multi_node)
2026-04-01 18:45:41 +02:00
a60767a7eb docs: update status with FAPP E2E flow completion 2026-04-01 16:36:41 +02:00
58 changed files with 8695 additions and 175 deletions

.gitignore vendored

@@ -24,6 +24,13 @@ qpc-server.toml
docs/internal/
# AI development workflow files
CLAUDE.md
master-prompt.md
scripts/ai_team.py
# LaTeX build artifacts
paper/*.aux
paper/*.bbl
paper/*.blg
paper/*.log
paper/*.out
paper/*.pdf

CLAUDE.md Normal file

@@ -0,0 +1,63 @@
# product.quicproquo
End-to-end encrypted group messaging over QUIC with MLS key agreement and post-quantum crypto.
## Tech Stack
- Rust 1.75+, Cargo workspace (12 crates)
- Crypto: OpenMLS 0.8, ML-KEM-768, X25519, ChaCha20-Poly1305, OPAQUE-KE
- Networking: Quinn (QUIC), Tokio, Tower middleware
- Serialization: Protobuf (prost) for v2, Cap'n Proto (legacy v1)
- DB: rusqlite with bundled SQLCipher
- Build: just (justfile), cargo-deny for supply chain audit
## Commands
```bash
just build # Build all workspace crates
just test # Run all tests
just test-core # Crypto tests only
just lint # clippy --workspace -- -D warnings
just fmt # Format check
just fmt-fix # Format fix
just proto # Rebuild protobuf codegen
just server # Build server binary
just client # Build client binary
cargo deny check # Supply chain audit (deny.toml)
```
## Architecture
```
crates/
quicprochat-core/ # Crypto primitives, MLS, double ratchet
quicprochat-proto/ # Protobuf definitions + prost codegen
quicprochat-rpc/ # RPC framework over QUIC
quicprochat-sdk/ # High-level client SDK
quicprochat-server/ # Server binary
quicprochat-client/ # CLI client binary
quicprochat-p2p/ # P2P mesh via iroh (feature-gated: `mesh`)
quicprochat-plugin-api/ # Plugin interface
quicprochat-kt/ # Kotlin/JNI bindings
meshservice/ # Generic decentralized service layer (FAPP, Housing)
apps/gui/ # GUI application
proto/ # .proto source files
schemas/ # Data schemas
docker/ # Container configs
```
## Rules
- `clippy::unwrap_used` is **deny** workspace-wide -- use proper error handling
- `unsafe_code` is **warn** -- avoid unless absolutely necessary, document why
- P2P crate (`quicprochat-p2p`) pulls ~90 extra deps via iroh -- only compiled with `mesh` feature
- All crypto operations must go through quicprochat-core, never inline crypto
- Protobuf is the v2 wire format; Cap'n Proto is legacy v1 only
## Do NOT
- Use `.unwrap()` or `.expect()` outside tests -- clippy will deny it
- Add crypto primitives outside of quicprochat-core
- Enable the `mesh` feature by default (heavy dependency tree)
- Mix v1 (capnp) and v2 (protobuf) serialization in new code
- Skip `cargo deny check` before adding new dependencies
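The unwrap rule in practice, as a hypothetical helper (not project code): return the error and let the caller decide, instead of panicking:

```rust
use std::num::ParseIntError;

// Instead of raw.parse().unwrap() (denied by clippy::unwrap_used),
// propagate the parse error with a Result.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    raw.trim().parse::<u16>()
}

fn main() {
    assert_eq!(parse_port(" 4433 "), Ok(4433));
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```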

Cargo.lock generated

@@ -3202,6 +3202,35 @@ version = "2.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8ca58f447f06ed17d5fc4043ce1b10dd205e060fb3ce5b979b8ed8e59ff3f79"
[[package]]
name = "mesh-viz-bridge"
version = "0.1.0"
dependencies = [
"anyhow",
"clap",
"futures-util",
"serde_json",
"tokio",
"tokio-tungstenite",
]
[[package]]
name = "meshservice"
version = "0.1.0"
dependencies = [
"anyhow",
"chacha20poly1305",
"ciborium",
"ed25519-dalek 2.2.0",
"hkdf",
"rand 0.8.5",
"serde",
"sha2 0.10.9",
"thiserror 1.0.69",
"tokio",
"x25519-dalek",
]
[[package]]
name = "metrics"
version = "0.22.4"
@@ -4472,6 +4501,7 @@ dependencies = [
"hkdf",
"humantime-serde",
"iroh",
"meshservice",
"quicprochat-core",
"rand 0.8.5",
"serde",

Cargo.toml

@@ -12,6 +12,10 @@ members = [
# P2P crate uses iroh (~90 extra deps). Only compiled when the `mesh`
# feature is enabled on quicprochat-client.
"crates/quicprochat-p2p",
# Generic decentralized service layer (FAPP, Housing, etc.)
"crates/meshservice",
# WebSocket bridge for viz/mesh-graph.html (tails NDJSON → browsers)
"viz/bridge",
]
[workspace.package]

README.md

@@ -84,6 +84,7 @@ quicprochat/
│ ├── quicprochat-client # CLI + REPL + TUI (Ratatui)
│ ├── quicprochat-kt # Key transparency (Merkle-log, revocation)
│ ├── quicprochat-p2p # iroh P2P, mesh identity, store-and-forward
│ ├── meshservice # Decentralized service layer (FAPP, housing, wire format)
│ ├── quicprochat-ffi # C FFI (libquicprochat_ffi.so)
│ └── quicprochat-plugin-api # Dynamic plugin hooks (C ABI)
├── proto/qpc/v1/ # 15 .proto schema files

assets/logo-ccc.png Normal file (binary, 1.3 MiB)

crates/meshservice/Cargo.toml

@@ -0,0 +1,45 @@
[package]
name = "meshservice"
version = "0.1.0"
edition = "2021"
authors = ["Chris <c@xorwell.de>"]
description = "Generic decentralized service layer for mesh networks"
license = "MIT"
repository = "https://git.xorwell.de/c/meshservice"
keywords = ["mesh", "p2p", "decentralized", "services"]
categories = ["network-programming"]
[dependencies]
# Serialization
serde = { version = "1.0", features = ["derive"] }
ciborium = "0.2"
# Crypto
ed25519-dalek = { version = "2.1", features = ["serde"] }
sha2 = "0.10"
rand = "0.8"
x25519-dalek = { version = "2.0", features = ["static_secrets"] }
chacha20poly1305 = "0.10"
hkdf = "0.12"
# Async
tokio = { version = "1.36", features = ["sync", "time"] }
# Error handling
anyhow = "1.0"
thiserror = "1.0"
[dev-dependencies]
tokio = { version = "1.36", features = ["rt-multi-thread", "macros"] }
[[example]]
name = "fapp_service"
path = "examples/fapp_service.rs"
[[example]]
name = "housing_service"
path = "examples/housing_service.rs"
[[example]]
name = "multi_service"
path = "examples/multi_service.rs"

crates/meshservice/README.md

@@ -0,0 +1,233 @@
# MeshService
A generic decentralized service layer for mesh networks. Build any peer-to-peer service following the **Announce → Query → Response → Reserve** pattern.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Application Services │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ FAPP │ │ Housing │ │ Repair │ │ Custom │ ... │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ └────────────┴────────────┴────────────┘ │
│ Service Layer (this crate) │
│ ServiceMessage, ServiceRouter, Verification │
│ ─────────────────────────────────────────────────────── │
│ Mesh Layer │
│ (provided by quicprochat-p2p or other mesh impl) │
└─────────────────────────────────────────────────────────────┘
```
## QuicProChat / quicprochat-p2p
This crate lives in the **product.quicproquo** workspace. Integration with the mesh stack:
- **Ed25519 seed**: `MeshIdentity::seed_bytes()` matches `ServiceIdentity::from_secret(&seed)` (same `ed25519-dalek` derivation as `quicprochat_core::IdentityKeypair`); truncated mesh address is SHA-256(pubkey)[0..16] in both layers.
- **Example transport**: integration test `crates/quicprochat-p2p/tests/meshservice_tcp_transport.rs` sends `wire::encode(ServiceMessage)` over `TcpTransport` (length-prefixed framing). For iroh/production, embed the same bytes in `MeshEnvelope` on ALPN `quicprochat/mesh/1`.
Run the test from the repo root:
```bash
cargo test -p quicprochat-p2p --test meshservice_tcp_transport
```
## Features
- **Generic Protocol**: Any service can be built on top (therapy appointments, housing, repairs, tutoring...)
- **Ed25519 Signatures**: All messages cryptographically signed
- **Verification Framework**: Multi-level trust (self-asserted, peer-endorsed, registry-verified)
- **Efficient Wire Format**: Fixed 64-byte header + CBOR payload
- **Pluggable Handlers**: Register custom services with the router
- **Built-in Services**: FAPP (psychotherapy) and Housing included
## Quick Start
```rust
use meshservice::{
    capabilities,
    identity::ServiceIdentity,
    router::ServiceRouter,
    services::fapp::{FappService, SlotAnnounce, SlotQuery, Specialism, Modality},
};

// Create identity
let identity = ServiceIdentity::generate();

// Create router with FAPP service
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));

// Therapist announces slots
let announce = SlotAnnounce::new(
    &[Specialism::CognitiveBehavioral],
    Modality::VideoCall,
    "104", // Postal prefix
)
.with_slots(3)
.with_profile("https://therapists.de/dr-mueller");

let msg = meshservice::services::fapp::create_announce(&identity, &announce, 1)?;
router.handle(msg, Some(identity.public_key()))?;

// Patient queries
let query = SlotQuery::new(Specialism::CognitiveBehavioral, "104");
let query_msg = meshservice::services::fapp::create_query(&identity, &query)?;
let matches = router.query(&query_msg);
println!("Found {} therapists", matches.len());
```
## Built-in Services
### FAPP (Free Appointment Propagation Protocol)
Decentralized psychotherapy appointment discovery:
| Service ID | Purpose |
|------------|---------|
| `0x0001` | Therapist slot announcements, patient queries |
```rust
use meshservice::services::fapp::{SlotAnnounce, Specialism, Modality};
let announce = SlotAnnounce::new(
    &[Specialism::TraumaFocused, Specialism::CognitiveBehavioral],
    Modality::InPerson,
    "104",
)
.with_slots(2)
.with_profile("https://kbv.de/123");
```
### Housing
Decentralized room/apartment sharing:
| Service ID | Purpose |
|------------|---------|
| `0x0002` | Listing announcements, seeker queries |
```rust
use meshservice::services::housing::{ListingAnnounce, ListingType, amenities};
let listing = ListingAnnounce::new(ListingType::Apartment, 65, 850, "104")
    .with_rooms(2)
    .with_amenities(amenities::FURNISHED | amenities::BALCONY);
```
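The postal prefixes in these examples match hierarchically: an announce stored under `104` is found by queries for `104`, `10`, or `1`, as in the housing demo's "any 10xxx area" query. A sketch of the assumed matching rule (the crate's actual matcher may differ):

```rust
// An announce matches when its postal prefix starts with the query's
// (possibly shorter) prefix, so broader queries see more listings.
fn prefix_matches(announce_prefix: &str, query_prefix: &str) -> bool {
    announce_prefix.starts_with(query_prefix)
}

fn main() {
    assert!(prefix_matches("104", "10"));  // Kreuzberg matches "10xxx"
    assert!(!prefix_matches("120", "10")); // Neukölln (120xx) does not
    assert!(prefix_matches("120", "1"));   // but matches any "1xxxx"
    println!("ok");
}
```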
## Verification Framework
Three trust levels:
| Level | Description | Example |
|-------|-------------|---------|
| 0 - None | Bare announcement | Anonymous |
| 1 - Self-Asserted | Profile URL provided | Website link |
| 2 - Peer-Endorsed | Trusted peers vouch | Community rating |
| 3 - Registry-Verified | Official registry | KBV license |
```rust
use meshservice::verification::{Verification, TrustedVerifiers, VerificationLevel};

// Add trusted verifier
let mut verifiers = TrustedVerifiers::new();
verifiers.add(registry_public_key, "KBV Registry", VerificationLevel::RegistryVerified);
router.set_trusted_verifiers(verifiers);

// Require verification for announces
router.set_min_verification_level(2);
```
## Wire Protocol
64-byte fixed header for efficient parsing:
```
0-3 service_id (u32 LE)
4 message_type (u8)
5 version (u8)
6-7 flags (reserved)
8-23 message_id (16 bytes)
24-39 sender_address (16 bytes)
40-47 sequence (u64 LE)
48-49 ttl_hours (u16 LE)
50-57 timestamp (u64 LE)
58 hop_count (u8)
59 max_hops (u8)
60-63 payload_len (u32 LE)
---
64+ signature (64 bytes)
128+ payload (CBOR)
... verifications (optional CBOR)
```
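The layout above can be exercised with a plain byte-packing sketch. Offsets are taken straight from the table; this is not the crate's actual wire code, just a check that the fields tile the 64 bytes:

```rust
// Pack the fixed header fields at the offsets listed above.
// Bytes 6-7 (flags) stay zero, as they are reserved.
struct Header {
    service_id: u32,
    message_type: u8,
    version: u8,
    message_id: [u8; 16],
    sender_address: [u8; 16],
    sequence: u64,
    ttl_hours: u16,
    timestamp: u64,
    hop_count: u8,
    max_hops: u8,
    payload_len: u32,
}

fn encode_header(h: &Header) -> [u8; 64] {
    let mut b = [0u8; 64];
    b[0..4].copy_from_slice(&h.service_id.to_le_bytes());
    b[4] = h.message_type;
    b[5] = h.version;
    b[8..24].copy_from_slice(&h.message_id);
    b[24..40].copy_from_slice(&h.sender_address);
    b[40..48].copy_from_slice(&h.sequence.to_le_bytes());
    b[48..50].copy_from_slice(&h.ttl_hours.to_le_bytes());
    b[50..58].copy_from_slice(&h.timestamp.to_le_bytes());
    b[58] = h.hop_count;
    b[59] = h.max_hops;
    b[60..64].copy_from_slice(&h.payload_len.to_le_bytes());
    b
}

fn main() {
    let h = Header {
        service_id: 0x0001, // FAPP
        message_type: 1,
        version: 1,
        message_id: [0xAA; 16],
        sender_address: [0xBB; 16],
        sequence: 42,
        ttl_hours: 24,
        timestamp: 1_700_000_000,
        hop_count: 0,
        max_hops: 8,
        payload_len: 128,
    };
    let b = encode_header(&h);
    assert_eq!(&b[0..4], &1u32.to_le_bytes()); // service_id, LE
    assert_eq!(b[59], 8);                      // max_hops at offset 59
    println!("ok");
}
```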
## Building Custom Services
Implement `ServiceHandler`:
```rust
use meshservice::router::{ServiceHandler, ServiceAction, HandlerContext};

struct MyService;

impl ServiceHandler for MyService {
    fn service_id(&self) -> u32 { 0x8001 } // Custom range
    fn name(&self) -> &str { "MyService" }

    fn handle(&self, message: &ServiceMessage, ctx: &HandlerContext)
        -> Result<ServiceAction, ServiceError>
    {
        match message.message_type {
            MessageType::Announce => Ok(ServiceAction::StoreAndForward),
            MessageType::Query => {
                // Find matches, respond...
                Ok(ServiceAction::Handled)
            }
            _ => Ok(ServiceAction::Drop),
        }
    }

    fn matches_query(&self, announce: &StoredMessage, query: &ServiceMessage) -> bool {
        // Custom matching logic
        true
    }
}
```
## Service IDs
| ID | Service |
|----|---------|
| `0x0001` | FAPP (Psychotherapy) |
| `0x0002` | Housing |
| `0x0003` | Repair |
| `0x0004` | Tutoring |
| `0x0005` | Medical |
| `0x0006` | Legal |
| `0x0007` | Volunteer |
| `0x0008` | Events |
| `0x8000+` | Custom/User-defined |
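Telling built-in IDs apart from user-defined ones only needs the split above (sketch; the crate may expose this differently):

```rust
// IDs below 0x8000 are reserved for built-in services; 0x8000 and up
// are free for custom ServiceHandler implementations.
fn is_custom_service(service_id: u32) -> bool {
    service_id >= 0x8000
}

fn main() {
    assert!(!is_custom_service(0x0001)); // FAPP: built-in
    assert!(!is_custom_service(0x0002)); // Housing: built-in
    assert!(is_custom_service(0x8001));  // custom range
    println!("ok");
}
```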
## Examples
```bash
# FAPP demo (therapist + patient)
cargo run --example fapp_service
# Housing demo (landlord + seeker)
cargo run --example housing_service
# Multi-service mesh
cargo run --example multi_service
```
## Testing
```bash
cargo test
```
## License
MIT

crates/meshservice/examples/fapp_service.rs

@@ -0,0 +1,86 @@
//! FAPP Service Demo
//!
//! Demonstrates therapist announcement and patient query flow.
use meshservice::{
capabilities,
identity::ServiceIdentity,
router::ServiceRouter,
services::fapp::{create_announce, create_query, FappService, Modality, SlotAnnounce, SlotQuery, Specialism},
};
fn main() {
println!("=== FAPP Service Demo ===\n");
// Create identities
let therapist = ServiceIdentity::generate();
let patient = ServiceIdentity::generate();
let relay = ServiceIdentity::generate();
println!("Therapist address: {:?}", hex(&therapist.address()));
println!("Patient address: {:?}", hex(&patient.address()));
println!("Relay address: {:?}\n", hex(&relay.address()));
// Create router with FAPP service
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));
// Therapist creates announcement
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral, Specialism::TraumaFocused],
Modality::VideoCall,
"104", // Berlin Kreuzberg
)
.with_slots(3)
.with_profile("https://therapists.de/dr-schmidt")
.with_name("Dr. Anna Schmidt");
println!("Therapist announces:");
println!(" Specialisms: CBT, Trauma");
println!(" Modality: Video");
println!(" Location: 104xx");
println!(" Slots: 3");
println!(" Profile: https://therapists.de/dr-schmidt\n");
let msg = create_announce(&therapist, &announce, 1).unwrap();
let action = router.handle(msg.clone(), Some(therapist.public_key())).unwrap();
println!("Router action: {:?}", action);
println!("Stored messages: {}\n", router.store().len());
// Patient creates query
let query = SlotQuery::new(Specialism::CognitiveBehavioral, "104")
.with_modality(Modality::VideoCall)
.with_max_wait(30);
println!("Patient queries:");
println!(" Looking for: CBT");
println!(" Location: 104xx");
println!(" Modality: Video");
println!(" Max wait: 30 days\n");
let query_msg = create_query(&patient, &query).unwrap();
// Find matches
let matches = router.query(&query_msg);
println!("Found {} matching therapist(s):", matches.len());
for (i, m) in matches.iter().enumerate() {
if let Ok(data) = meshservice::services::fapp::SlotAnnounce::from_bytes(&m.message.payload) {
println!(" {}. {} in {}xx ({} slots)",
i + 1,
data.display_name.as_deref().unwrap_or("Unknown"),
data.postal_prefix,
data.available_slots
);
if let Some(profile) = &data.profile_url {
println!(" Verify: {}", profile);
}
}
}
println!("\n=== Demo Complete ===");
}
fn hex(bytes: &[u8]) -> String {
bytes.iter().map(|b| format!("{b:02x}")).collect()
}

crates/meshservice/examples/housing_service.rs

@@ -0,0 +1,97 @@
//! Housing Service Demo
//!
//! Demonstrates landlord listing and seeker query flow.
use meshservice::{
capabilities,
identity::ServiceIdentity,
router::ServiceRouter,
services::housing::{
amenities, create_announce, create_query, HousingService, ListingAnnounce, ListingQuery,
ListingType,
},
};
fn main() {
println!("=== Housing Service Demo ===\n");
// Create identities
let landlord1 = ServiceIdentity::generate();
let landlord2 = ServiceIdentity::generate();
let seeker = ServiceIdentity::generate();
// Create router with Housing service
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(HousingService::relay()));
// Landlord 1: Kreuzberg apartment
let listing1 = ListingAnnounce::new(ListingType::Apartment, 65, 950, "104")
.with_rooms(2)
.with_amenities(amenities::FURNISHED | amenities::BALCONY | amenities::INTERNET)
.with_title("Sunny 2-room in Kreuzberg");
println!("Landlord 1 announces:");
println!(" {} sqm {} in {}xx", listing1.size_sqm, "Apartment", listing1.postal_prefix);
println!(" Rent: {} EUR/month", listing1.rent_euros());
println!(" Rooms: {}", listing1.rooms);
println!(" Amenities: Furnished, Balcony, Internet\n");
let msg1 = create_announce(&landlord1, &listing1, 1).unwrap();
router.handle(msg1, Some(landlord1.public_key())).unwrap();
// Landlord 2: Neukölln shared flat room
let listing2 = ListingAnnounce::new(ListingType::Room, 18, 450, "120")
.with_rooms(1)
.with_amenities(amenities::WASHING_MACHINE | amenities::INTERNET)
.with_title("Room in friendly WG");
println!("Landlord 2 announces:");
println!(" {} sqm {} in {}xx", listing2.size_sqm, "Room", listing2.postal_prefix);
println!(" Rent: {} EUR/month", listing2.rent_euros());
println!(" Amenities: Washing machine, Internet\n");
let msg2 = create_announce(&landlord2, &listing2, 1).unwrap();
router.handle(msg2, Some(landlord2.public_key())).unwrap();
println!("Total listings in store: {}\n", router.store().len());
// Seeker 1: Looking for affordable apartment
println!("--- Seeker Query 1: Affordable apartment ---");
let query1 = ListingQuery::new("10", 800) // Any 10xxx area, max 800 EUR
.with_type(ListingType::Apartment)
.with_min_size(40);
println!(" Area: 10xxx");
println!(" Type: Apartment");
println!(" Max rent: 800 EUR");
println!(" Min size: 40 sqm\n");
let query_msg1 = create_query(&seeker, &query1).unwrap();
let matches1 = router.query(&query_msg1);
println!("Found {} matches:", matches1.len());
for m in &matches1 {
if let Ok(l) = ListingAnnounce::from_bytes(&m.message.payload) {
println!(" - {} ({}xx, {} EUR)", l.title.as_deref().unwrap_or("No title"), l.postal_prefix, l.rent_euros());
}
}
// Seeker 2: Looking for any cheap room
println!("\n--- Seeker Query 2: Any room under 500 EUR ---");
let query2 = ListingQuery::new("1", 500); // Any 1xxxx area
let query_msg2 = create_query(&seeker, &query2).unwrap();
let matches2 = router.query(&query_msg2);
println!("Found {} matches:", matches2.len());
for m in &matches2 {
if let Ok(l) = ListingAnnounce::from_bytes(&m.message.payload) {
println!(" - {} ({}xx, {} sqm, {} EUR)",
l.title.as_deref().unwrap_or("No title"),
l.postal_prefix,
l.size_sqm,
l.rent_euros()
);
}
}
println!("\n=== Demo Complete ===");
}

crates/meshservice/examples/multi_service.rs

@@ -0,0 +1,89 @@
//! Multi-Service Demo
//!
//! Shows how multiple services can run on the same mesh router.
use meshservice::{
capabilities,
identity::ServiceIdentity,
router::ServiceRouter,
service_ids,
services::{
fapp::{create_announce as fapp_announce, FappService, Modality, SlotAnnounce, Specialism},
housing::{
amenities, create_announce as housing_announce, HousingService, ListingAnnounce,
ListingType,
},
},
verification::{TrustedVerifiers, Verification, VerificationLevel},
};
fn main() {
println!("=== Multi-Service Mesh Demo ===\n");
// Create a router that handles both FAPP and Housing
let mut router = ServiceRouter::new(capabilities::RELAY | capabilities::CONSUMER);
router.register(Box::new(FappService::relay()));
router.register(Box::new(HousingService::relay()));
println!("Registered services:");
for (id, name) in router.services() {
println!(" 0x{:04x} - {}", id, name);
}
println!();
// Create identities
let therapist = ServiceIdentity::generate();
let landlord = ServiceIdentity::generate();
let registry = ServiceIdentity::generate();
// Setup trusted verifiers
let mut verifiers = TrustedVerifiers::new();
verifiers.add(
registry.public_key(),
"Health Registry",
VerificationLevel::RegistryVerified,
);
router.set_trusted_verifiers(verifiers);
// Therapist announcement with verification
println!("--- Adding FAPP announcement ---");
let fapp_data = SlotAnnounce::new(&[Specialism::Psychoanalysis], Modality::InPerson, "104")
.with_profile("https://kbv.de/therapists/12345");
let mut fapp_msg = fapp_announce(&therapist, &fapp_data, 1).unwrap();
// Registry verifies therapist
let verification = Verification::registry(
&registry,
&therapist.address(),
"licensed_therapist",
"KBV-12345",
);
fapp_msg.add_verification(verification);
router.handle(fapp_msg, Some(therapist.public_key())).unwrap();
println!("FAPP announcement stored (with registry verification)\n");
// Housing announcement
println!("--- Adding Housing announcement ---");
let housing_data = ListingAnnounce::new(ListingType::Studio, 35, 700, "104")
.with_amenities(amenities::FURNISHED | amenities::INTERNET)
.with_title("Cozy studio near therapist offices");
let housing_msg = housing_announce(&landlord, &housing_data, 1).unwrap();
router.handle(housing_msg, Some(landlord.public_key())).unwrap();
println!("Housing announcement stored\n");
// Summary
println!("--- Store Summary ---");
println!("FAPP messages: {}", router.store().service_count(service_ids::FAPP));
println!("Housing messages: {}", router.store().service_count(service_ids::HOUSING));
println!("Total messages: {}", router.store().len());
println!("\n=== Multi-Service Demo Complete ===");
println!("\nThe mesh can route and store messages for multiple services");
println!("using a single router instance. Each service has its own:");
println!(" - Payload format");
println!(" - Query matching logic");
println!(" - Handler implementation");
}


@@ -0,0 +1,532 @@
//! Anti-abuse mechanisms for preventing slot blocking and spam.
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};
use sha2::{Digest, Sha256};
/// Rate limiting configuration.
#[derive(Debug, Clone)]
pub struct RateLimits {
/// Max reservations per sender per hour.
pub max_reservations_per_hour: u8,
/// Max pending (unconfirmed) reservations per sender.
pub max_pending_reservations: u8,
/// Min time between reservations (seconds).
pub reservation_cooldown_secs: u32,
/// Max queries per sender per minute.
pub max_queries_per_minute: u8,
}
impl Default for RateLimits {
fn default() -> Self {
Self {
max_reservations_per_hour: 3,
max_pending_reservations: 2,
reservation_cooldown_secs: 300,
max_queries_per_minute: 10,
}
}
}
/// Tracks sender activity for rate limiting.
#[derive(Debug, Default)]
pub struct RateLimiter {
limits: RateLimits,
/// sender_address -> activity
activity: HashMap<[u8; 16], SenderActivity>,
}
#[derive(Debug, Default)]
struct SenderActivity {
/// Timestamps of reservations in last hour.
reservation_times: Vec<u64>,
/// Count of pending reservations.
pending_count: u8,
/// Timestamp of last reservation.
last_reservation: u64,
/// Query timestamps in last minute.
query_times: Vec<u64>,
}
impl RateLimiter {
/// Create with default limits.
pub fn new() -> Self {
Self::default()
}
/// Create with custom limits.
pub fn with_limits(limits: RateLimits) -> Self {
Self {
limits,
activity: HashMap::new(),
}
}
/// Check if a reservation is allowed.
pub fn check_reservation(&mut self, sender: &[u8; 16]) -> RateLimitResult {
let now = now();
let activity = self.activity.entry(*sender).or_default();
// Clean old entries
activity.reservation_times.retain(|&t| now - t < 3600);
// Check cooldown
if now - activity.last_reservation < u64::from(self.limits.reservation_cooldown_secs) {
return RateLimitResult::Cooldown {
wait_secs: self.limits.reservation_cooldown_secs - (now - activity.last_reservation) as u32,
};
}
// Check hourly limit
if activity.reservation_times.len() >= self.limits.max_reservations_per_hour as usize {
return RateLimitResult::HourlyLimitReached;
}
// Check pending limit
if activity.pending_count >= self.limits.max_pending_reservations {
return RateLimitResult::TooManyPending;
}
RateLimitResult::Allowed
}
/// Record a reservation attempt.
pub fn record_reservation(&mut self, sender: &[u8; 16]) {
let now = now();
let activity = self.activity.entry(*sender).or_default();
activity.reservation_times.push(now);
activity.last_reservation = now;
activity.pending_count = activity.pending_count.saturating_add(1);
}
/// Record reservation confirmed/completed (reduce pending).
pub fn record_reservation_resolved(&mut self, sender: &[u8; 16]) {
if let Some(activity) = self.activity.get_mut(sender) {
activity.pending_count = activity.pending_count.saturating_sub(1);
}
}
/// Check if a query is allowed.
pub fn check_query(&mut self, sender: &[u8; 16]) -> RateLimitResult {
let now = now();
let activity = self.activity.entry(*sender).or_default();
// Clean old entries
activity.query_times.retain(|&t| now - t < 60);
if activity.query_times.len() >= self.limits.max_queries_per_minute as usize {
return RateLimitResult::QueryLimitReached;
}
RateLimitResult::Allowed
}
/// Record a query.
pub fn record_query(&mut self, sender: &[u8; 16]) {
let now = now();
let activity = self.activity.entry(*sender).or_default();
activity.query_times.push(now);
}
/// Prune old activity data.
pub fn prune(&mut self) {
let now = now();
self.activity.retain(|_, a| {
a.reservation_times.retain(|&t| now - t < 3600);
a.query_times.retain(|&t| now - t < 60);
!a.reservation_times.is_empty() || !a.query_times.is_empty() || a.pending_count > 0
});
}
}
/// Result of rate limit check.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum RateLimitResult {
/// Request allowed.
Allowed,
/// Must wait before next reservation.
Cooldown { wait_secs: u32 },
/// Hourly reservation limit reached.
HourlyLimitReached,
/// Too many pending reservations.
TooManyPending,
/// Query rate limit reached.
QueryLimitReached,
}
impl RateLimitResult {
pub fn is_allowed(&self) -> bool {
matches!(self, RateLimitResult::Allowed)
}
}
/// Proof-of-work for reservation requests.
#[derive(Debug, Clone)]
pub struct ProofOfWork {
/// Nonce that produces valid hash.
pub nonce: u64,
/// Required difficulty (leading zero bits).
pub difficulty: u8,
}
impl ProofOfWork {
/// Default difficulty (20 bits ≈ 1-2 seconds on modern CPU).
pub const DEFAULT_DIFFICULTY: u8 = 20;
/// Generate proof-of-work for a reservation.
pub fn generate(reservation_id: &[u8; 16], difficulty: u8) -> Self {
let mut nonce = 0u64;
loop {
if Self::check_hash(reservation_id, nonce, difficulty) {
return Self { nonce, difficulty };
}
nonce = nonce.wrapping_add(1);
}
}
/// Verify proof-of-work.
pub fn verify(&self, reservation_id: &[u8; 16]) -> bool {
Self::check_hash(reservation_id, self.nonce, self.difficulty)
}
fn check_hash(reservation_id: &[u8; 16], nonce: u64, difficulty: u8) -> bool {
let mut hasher = Sha256::new();
hasher.update(reservation_id);
hasher.update(&nonce.to_le_bytes());
let hash = hasher.finalize();
leading_zero_bits(&hash) >= difficulty
}
}
/// Count leading zero bits in a byte slice.
fn leading_zero_bits(data: &[u8]) -> u8 {
let mut count = 0u8;
for byte in data {
if *byte == 0 {
count += 8;
} else {
count += byte.leading_zeros() as u8;
break;
}
}
count
}
/// Sender reputation tracking.
#[derive(Debug, Clone, Default)]
pub struct SenderReputation {
pub address: [u8; 16],
pub reservations_made: u32,
pub reservations_honored: u32,
pub reservations_cancelled: u32,
pub no_shows: u32,
pub last_no_show: Option<u64>,
}
impl SenderReputation {
/// Create for a new sender.
pub fn new(address: [u8; 16]) -> Self {
Self {
address,
..Default::default()
}
}
/// Calculate honor rate (0.0 to 1.0).
pub fn honor_rate(&self) -> f32 {
if self.reservations_made == 0 {
return 0.5; // Neutral for new users
}
(self.reservations_honored as f32) / (self.reservations_made as f32)
}
/// Check if sender should be blocked.
pub fn is_blocked(&self) -> bool {
self.no_shows >= 3 || (self.reservations_made >= 5 && self.honor_rate() < 0.5)
}
/// Record a completed reservation.
pub fn record_honored(&mut self) {
self.reservations_made += 1;
self.reservations_honored += 1;
}
/// Record a cancelled reservation (with notice).
pub fn record_cancelled(&mut self) {
self.reservations_made += 1;
self.reservations_cancelled += 1;
}
/// Record a no-show.
pub fn record_no_show(&mut self) {
self.reservations_made += 1;
self.no_shows += 1;
self.last_no_show = Some(now());
}
}
/// Reputation store.
#[derive(Debug, Default)]
pub struct ReputationStore {
reputations: HashMap<[u8; 16], SenderReputation>,
}
impl ReputationStore {
pub fn new() -> Self {
Self::default()
}
/// Get or create reputation for a sender.
pub fn get_or_create(&mut self, address: [u8; 16]) -> &mut SenderReputation {
self.reputations
.entry(address)
.or_insert_with(|| SenderReputation::new(address))
}
/// Get reputation (read-only).
pub fn get(&self, address: &[u8; 16]) -> Option<&SenderReputation> {
self.reputations.get(address)
}
/// Check if sender is blocked.
pub fn is_blocked(&self, address: &[u8; 16]) -> bool {
self.reputations
.get(address)
.map(|r| r.is_blocked())
.unwrap_or(false)
}
/// Get honor rate (0.5 for unknown).
pub fn honor_rate(&self, address: &[u8; 16]) -> f32 {
self.reputations
.get(address)
.map(|r| r.honor_rate())
.unwrap_or(0.5)
}
}
/// Blocklist entry.
#[derive(Debug, Clone)]
pub struct BlocklistEntry {
pub blocked_address: [u8; 16],
pub reason: BlockReason,
pub reported_by: [u8; 16],
pub signature: Vec<u8>,
pub timestamp: u64,
}
/// Reason for blocking.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum BlockReason {
NoShow = 1,
Spam = 2,
Harassment = 3,
FakeIdentity = 4,
}
/// Therapist-defined reservation policy.
#[derive(Debug, Clone)]
pub struct TherapistPolicy {
/// Max pending reservations from new senders.
pub max_pending_new: u8,
/// Max pending from established senders.
pub max_pending_established: u8,
/// Require this verification level for reservations.
pub min_verification_level: u8,
/// Auto-reject senders with honor rate below this.
pub min_honor_rate: f32,
/// Require proof-of-work.
pub require_pow: bool,
/// PoW difficulty (if required).
pub pow_difficulty: u8,
}
impl Default for TherapistPolicy {
fn default() -> Self {
Self {
max_pending_new: 1,
max_pending_established: 3,
min_verification_level: 0,
min_honor_rate: 0.5,
require_pow: true,
pow_difficulty: ProofOfWork::DEFAULT_DIFFICULTY,
}
}
}
impl TherapistPolicy {
/// Check if a reservation request meets policy.
pub fn check(
&self,
sender_reputation: &SenderReputation,
sender_verification_level: u8,
pow: Option<&ProofOfWork>,
reservation_id: &[u8; 16],
) -> PolicyResult {
// Check verification level
if sender_verification_level < self.min_verification_level {
return PolicyResult::InsufficientVerification;
}
// Check honor rate
if sender_reputation.honor_rate() < self.min_honor_rate {
return PolicyResult::LowReputation;
}
// Check blocked
if sender_reputation.is_blocked() {
return PolicyResult::Blocked;
}
// Check proof-of-work
if self.require_pow {
match pow {
Some(p) if p.difficulty >= self.pow_difficulty && p.verify(reservation_id) => {}
Some(_) => return PolicyResult::InvalidPoW,
None => return PolicyResult::MissingPoW,
}
}
PolicyResult::Allowed
}
}
/// Result of policy check.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum PolicyResult {
Allowed,
InsufficientVerification,
LowReputation,
Blocked,
MissingPoW,
InvalidPoW,
}
impl PolicyResult {
pub fn is_allowed(&self) -> bool {
matches!(self, PolicyResult::Allowed)
}
}
fn now() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn rate_limiter_allows_first_reservation() {
let mut limiter = RateLimiter::new();
let sender = [1u8; 16];
assert!(limiter.check_reservation(&sender).is_allowed());
}
#[test]
fn rate_limiter_enforces_cooldown() {
let mut limiter = RateLimiter::with_limits(RateLimits {
reservation_cooldown_secs: 300,
..Default::default()
});
let sender = [2u8; 16];
limiter.record_reservation(&sender);
let result = limiter.check_reservation(&sender);
assert!(matches!(result, RateLimitResult::Cooldown { .. }));
}
#[test]
fn rate_limiter_enforces_hourly_limit() {
let mut limiter = RateLimiter::with_limits(RateLimits {
max_reservations_per_hour: 2,
reservation_cooldown_secs: 0,
..Default::default()
});
let sender = [3u8; 16];
limiter.record_reservation(&sender);
limiter.record_reservation(&sender);
assert_eq!(limiter.check_reservation(&sender), RateLimitResult::HourlyLimitReached);
}
#[test]
fn pow_generation_and_verification() {
let reservation_id = [42u8; 16];
let pow = ProofOfWork::generate(&reservation_id, 8); // Low difficulty for test
assert!(pow.verify(&reservation_id));
assert!(!pow.verify(&[0u8; 16])); // Wrong ID
}
#[test]
fn reputation_tracking() {
let mut rep = SenderReputation::new([5u8; 16]);
rep.record_honored();
rep.record_honored();
rep.record_no_show();
assert_eq!(rep.reservations_made, 3);
assert_eq!(rep.honor_rate(), 2.0 / 3.0);
assert!(!rep.is_blocked());
rep.record_no_show();
rep.record_no_show();
assert!(rep.is_blocked()); // 3 no-shows
}
#[test]
fn policy_check_pow() {
let policy = TherapistPolicy {
require_pow: true,
pow_difficulty: 8,
..Default::default()
};
let rep = SenderReputation::new([6u8; 16]);
let reservation_id = [7u8; 16];
// No PoW
assert_eq!(
policy.check(&rep, 0, None, &reservation_id),
PolicyResult::MissingPoW
);
// Valid PoW
let pow = ProofOfWork::generate(&reservation_id, 8);
assert_eq!(
policy.check(&rep, 0, Some(&pow), &reservation_id),
PolicyResult::Allowed
);
}
#[test]
fn policy_check_verification_level() {
let policy = TherapistPolicy {
min_verification_level: 2,
require_pow: false,
..Default::default()
};
let rep = SenderReputation::new([8u8; 16]);
let reservation_id = [9u8; 16];
assert_eq!(
policy.check(&rep, 1, None, &reservation_id),
PolicyResult::InsufficientVerification
);
assert_eq!(
policy.check(&rep, 2, None, &reservation_id),
PolicyResult::Allowed
);
}
}
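
A minimal standalone sketch of the bit-counting primitive the proof-of-work check rests on (the `leading_zero_bits` body is copied verbatim from the module above so the snippet compiles on its own; the `attempts_for` closure is an illustrative assumption about expected work, not part of the crate):

```rust
// Copied from the anti_abuse module for a self-contained demo.
fn leading_zero_bits(data: &[u8]) -> u8 {
    let mut count = 0u8;
    for byte in data {
        if *byte == 0 {
            count += 8;
        } else {
            count += byte.leading_zeros() as u8;
            break;
        }
    }
    count
}

fn main() {
    // A difficulty of d leading zero bits means ~2^d hash attempts on average.
    let attempts_for = |d: u32| 1u64 << d;
    assert_eq!(attempts_for(8), 256);
    assert_eq!(attempts_for(20), 1_048_576);

    // Bit counting matches the module's semantics: count whole zero bytes,
    // then the leading zeros of the first nonzero byte.
    assert_eq!(leading_zero_bits(&[0xff]), 0);
    assert_eq!(leading_zero_bits(&[0x00, 0xff]), 8);
    assert_eq!(leading_zero_bits(&[0x00, 0x0f]), 12);
    assert_eq!(leading_zero_bits(&[0x00, 0x00]), 16);
}
```

This is why `ProofOfWork::DEFAULT_DIFFICULTY = 20` is a meaningful throttle: each extra bit doubles the expected sender-side work while verification stays a single hash.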

@@ -0,0 +1,392 @@
//! End-to-end encryption for service message payloads.
//!
//! Uses X25519 key agreement + HKDF-SHA256 key derivation + ChaCha20-Poly1305 AEAD.
//! Encryption is opt-in per message: the sender encrypts the payload before
//! constructing the `ServiceMessage`, and the recipient decrypts after receiving.
//!
//! ## Key model
//!
//! Each `ServiceIdentity` (Ed25519) can derive an X25519 keypair for encryption.
//! - Sender generates an ephemeral X25519 key per message (forward secrecy).
//! - Shared secret is computed via X25519 DH with the recipient's public key.
//! - HKDF derives a per-message encryption key.
//! - ChaCha20-Poly1305 encrypts the payload with a random nonce.
//!
//! ## Wire format of encrypted payload
//!
//! ```text
//! [1 byte: version = 0x01]
//! [32 bytes: sender ephemeral X25519 public key]
//! [12 bytes: nonce]
//! [N bytes: ciphertext + 16-byte Poly1305 tag]
//! ```
use chacha20poly1305::aead::{Aead, KeyInit};
use chacha20poly1305::{ChaCha20Poly1305, Nonce};
use hkdf::Hkdf;
use rand::rngs::OsRng;
use rand::RngCore;
use x25519_dalek::{PublicKey as X25519Public, StaticSecret};
use crate::error::ServiceError;
use crate::identity::ServiceIdentity;
/// Current encrypted payload version byte.
const ENCRYPTED_VERSION: u8 = 0x01;
/// Overhead: 1 (version) + 32 (ephemeral pubkey) + 12 (nonce) + 16 (tag).
const ENCRYPTION_OVERHEAD: usize = 1 + 32 + 12 + 16;
/// X25519 keypair derived from a `ServiceIdentity` for encryption.
///
/// The Ed25519 seed bytes are reused directly as the X25519 static secret.
/// Note this is a plain reuse, not the hash-then-clamp Ed25519-to-X25519
/// conversion used by libsodium; the two approaches yield different keys.
pub struct EncryptionKeyPair {
secret: StaticSecret,
public: X25519Public,
}
impl EncryptionKeyPair {
/// Derive an encryption keypair from a `ServiceIdentity`.
pub fn from_identity(identity: &ServiceIdentity) -> Self {
let secret = StaticSecret::from(identity.secret_key());
let public = X25519Public::from(&secret);
Self { secret, public }
}
/// Get the X25519 public key bytes (advertise to peers for encryption).
pub fn public_bytes(&self) -> [u8; 32] {
self.public.to_bytes()
}
/// Encrypt a plaintext payload for a specific recipient.
///
/// Uses a fresh ephemeral key for forward secrecy: even if the sender's
/// long-term key is compromised, past messages remain confidential.
pub fn encrypt_for(
&self,
recipient_x25519_public: &[u8; 32],
plaintext: &[u8],
) -> Result<Vec<u8>, ServiceError> {
// Generate ephemeral keypair for this message
let eph_secret = StaticSecret::random_from_rng(OsRng);
let eph_public = X25519Public::from(&eph_secret);
// X25519 DH with recipient
let recipient_pub = X25519Public::from(*recipient_x25519_public);
let shared = eph_secret.diffie_hellman(&recipient_pub);
// Derive encryption key via HKDF
let key = derive_key(shared.as_bytes(), b"meshservice-e2e-v1");
// Encrypt with ChaCha20-Poly1305
let cipher = ChaCha20Poly1305::new((&key).into());
let mut nonce_bytes = [0u8; 12];
OsRng.fill_bytes(&mut nonce_bytes);
let nonce = Nonce::from_slice(&nonce_bytes);
let ciphertext = cipher
.encrypt(nonce, plaintext)
.map_err(|_| ServiceError::Crypto("encryption failed".into()))?;
// Assemble: version || ephemeral_public || nonce || ciphertext+tag
let mut out = Vec::with_capacity(ENCRYPTION_OVERHEAD + plaintext.len());
out.push(ENCRYPTED_VERSION);
out.extend_from_slice(&eph_public.to_bytes());
out.extend_from_slice(&nonce_bytes);
out.extend_from_slice(&ciphertext);
Ok(out)
}
/// Decrypt an encrypted payload sent to us.
///
/// Extracts the sender's ephemeral public key from the payload, computes
/// the shared secret with our static X25519 key, and decrypts.
pub fn decrypt(&self, encrypted: &[u8]) -> Result<Vec<u8>, ServiceError> {
if encrypted.len() < ENCRYPTION_OVERHEAD {
return Err(ServiceError::Crypto("ciphertext too short".into()));
}
let version = encrypted[0];
if version != ENCRYPTED_VERSION {
return Err(ServiceError::Crypto(format!(
"unsupported encryption version: {version}"
)));
}
let eph_public_bytes: [u8; 32] = encrypted[1..33]
.try_into()
.map_err(|_| ServiceError::Crypto("invalid ephemeral key".into()))?;
let nonce_bytes: [u8; 12] = encrypted[33..45]
.try_into()
.map_err(|_| ServiceError::Crypto("invalid nonce".into()))?;
let ciphertext = &encrypted[45..];
// X25519 DH with sender's ephemeral key
let eph_public = X25519Public::from(eph_public_bytes);
let shared = self.secret.diffie_hellman(&eph_public);
// Derive decryption key
let key = derive_key(shared.as_bytes(), b"meshservice-e2e-v1");
// Decrypt
let cipher = ChaCha20Poly1305::new((&key).into());
let nonce = Nonce::from_slice(&nonce_bytes);
cipher
.decrypt(nonce, ciphertext)
.map_err(|_| ServiceError::Crypto("decryption failed".into()))
}
}
/// Derive a 32-byte key from a shared secret using HKDF-SHA256.
fn derive_key(shared_secret: &[u8], info: &[u8]) -> [u8; 32] {
let hk = Hkdf::<sha2::Sha256>::new(None, shared_secret);
let mut key = [0u8; 32];
hk.expand(info, &mut key)
.expect("HKDF expand to 32 bytes should never fail");
key
}
/// Check whether a payload appears to be encrypted (starts with version byte
/// and has minimum length).
pub fn is_encrypted_payload(payload: &[u8]) -> bool {
payload.len() >= ENCRYPTION_OVERHEAD && payload[0] == ENCRYPTED_VERSION
}
/// Return the encryption overhead in bytes (useful for size budgets on
/// constrained transports like LoRa).
pub const fn encryption_overhead() -> usize {
ENCRYPTION_OVERHEAD
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn encrypt_decrypt_roundtrip() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"Hello, encrypted mesh world!";
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
let decrypted = recipient_keys.decrypt(&encrypted).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn wrong_recipient_cannot_decrypt() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let wrong_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let wrong_keys = EncryptionKeyPair::from_identity(&wrong_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"secret data")
.expect("encrypt");
let result = wrong_keys.decrypt(&encrypted);
assert!(result.is_err());
}
#[test]
fn tampered_ciphertext_fails() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let mut encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"do not tamper")
.expect("encrypt");
// Flip a byte in the ciphertext portion
let last = encrypted.len() - 1;
encrypted[last] ^= 0xff;
let result = recipient_keys.decrypt(&encrypted);
assert!(result.is_err());
}
#[test]
fn truncated_ciphertext_rejected() {
let recipient_id = ServiceIdentity::generate();
let keys = EncryptionKeyPair::from_identity(&recipient_id);
let result = keys.decrypt(&[0x01; 10]);
assert!(result.is_err());
}
#[test]
fn bad_version_rejected() {
let recipient_id = ServiceIdentity::generate();
let keys = EncryptionKeyPair::from_identity(&recipient_id);
// Valid length but wrong version
let mut fake = vec![0u8; ENCRYPTION_OVERHEAD + 10];
fake[0] = 0x99; // unsupported version byte
let result = keys.decrypt(&fake);
assert!(result.is_err());
}
#[test]
fn each_encryption_produces_different_ciphertext() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"same message twice";
let enc1 = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt 1");
let enc2 = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt 2");
// Different ephemeral keys + nonces => different ciphertext
assert_ne!(enc1, enc2);
// Both decrypt to the same plaintext
let dec1 = recipient_keys.decrypt(&enc1).expect("decrypt 1");
let dec2 = recipient_keys.decrypt(&enc2).expect("decrypt 2");
assert_eq!(dec1, plaintext);
assert_eq!(dec2, plaintext);
}
#[test]
fn empty_plaintext_roundtrip() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"")
.expect("encrypt empty");
assert_eq!(encrypted.len(), ENCRYPTION_OVERHEAD);
let decrypted = recipient_keys.decrypt(&encrypted).expect("decrypt empty");
assert!(decrypted.is_empty());
}
#[test]
fn is_encrypted_payload_detection() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"test")
.expect("encrypt");
assert!(is_encrypted_payload(&encrypted));
assert!(!is_encrypted_payload(b"plain text"));
assert!(!is_encrypted_payload(&[]));
}
#[test]
fn public_bytes_deterministic() {
let id = ServiceIdentity::generate();
let keys1 = EncryptionKeyPair::from_identity(&id);
let keys2 = EncryptionKeyPair::from_identity(&id);
assert_eq!(keys1.public_bytes(), keys2.public_bytes());
}
#[test]
fn encrypt_decrypt_with_service_message() {
// Full integration: encrypt payload, wrap in ServiceMessage, decrypt
use crate::message::ServiceMessage;
use crate::service_ids::FAPP;
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
// Encrypt the payload before creating the message
let plaintext = b"confidential appointment details";
let encrypted_payload = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
// Create a signed service message with the encrypted payload
let msg = ServiceMessage::new(
&sender_id,
FAPP,
crate::message::MessageType::Reserve,
encrypted_payload.clone(),
1,
);
// Verify the message signature still works (signs over encrypted payload)
assert!(msg.verify(&sender_id.public_key()));
// Recipient decrypts the payload
let decrypted = recipient_keys.decrypt(&msg.payload).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn encrypt_decrypt_wire_roundtrip() {
// Full wire roundtrip: encrypt -> sign -> encode -> decode -> verify -> decrypt
use crate::message::ServiceMessage;
use crate::service_ids::FAPP;
use crate::wire;
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"sensitive medical data over the mesh";
let encrypted_payload = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
let msg = ServiceMessage::new(
&sender_id,
FAPP,
crate::message::MessageType::Reserve,
encrypted_payload,
42,
);
// Encode to wire format
let wire_bytes = wire::encode(&msg).expect("encode");
// Decode from wire format
let decoded = wire::decode(&wire_bytes).expect("decode");
// Verify signature
assert!(decoded.verify(&sender_id.public_key()));
// Decrypt payload
let decrypted = recipient_keys.decrypt(&decoded.payload).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn encryption_overhead_constant() {
assert_eq!(encryption_overhead(), 61);
}
}
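
The wire format documented at the top of the module can be sketched as a standalone header splitter (the `split_header` helper and its names are illustrative assumptions, not part of the crate; the offsets and the 61-byte overhead mirror the module's constants):

```rust
// Framing from the module docs:
// [1: version][32: ephemeral X25519 pubkey][12: nonce][N: ciphertext + 16-byte tag]
const VERSION_LEN: usize = 1;
const EPH_PUB_LEN: usize = 32;
const NONCE_LEN: usize = 12;
const TAG_LEN: usize = 16;
const OVERHEAD: usize = VERSION_LEN + EPH_PUB_LEN + NONCE_LEN + TAG_LEN;

/// Split an encrypted payload into (version, ephemeral pubkey, nonce, ct+tag).
fn split_header(buf: &[u8]) -> Option<(u8, &[u8], &[u8], &[u8])> {
    if buf.len() < OVERHEAD {
        return None; // mirrors the "ciphertext too short" rejection
    }
    Some((buf[0], &buf[1..33], &buf[33..45], &buf[45..]))
}

fn main() {
    assert_eq!(OVERHEAD, 61);
    // A frame carrying 5 bytes of ciphertext body.
    let frame = vec![0x01u8; OVERHEAD + 5];
    let (version, eph, nonce, ct) = split_header(&frame).unwrap();
    assert_eq!(version, 0x01);
    assert_eq!(eph.len(), 32);
    assert_eq!(nonce.len(), 12);
    assert_eq!(ct.len(), 16 + 5); // Poly1305 tag + ciphertext bytes
    assert!(split_header(&[0x01; 10]).is_none()); // truncated input rejected
}
```

The fixed 61-byte overhead is what `encryption_overhead()` exposes for size budgeting on constrained links such as LoRa.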

@@ -0,0 +1,55 @@
//! Error types for the mesh service layer.
use thiserror::Error;
/// Errors that can occur in the service layer.
#[derive(Debug, Error)]
pub enum ServiceError {
#[error("invalid message format: {0}")]
InvalidFormat(String),
#[error("unknown service ID: {0}")]
UnknownService(u32),
#[error("signature verification failed")]
SignatureInvalid,
#[error("message expired")]
Expired,
#[error("max hops exceeded")]
MaxHopsExceeded,
#[error("missing capability: {0}")]
MissingCapability(String),
#[error("store full")]
StoreFull,
#[error("duplicate message")]
Duplicate,
#[error("serialization error: {0}")]
Serialization(String),
#[error("crypto error: {0}")]
Crypto(String),
#[error("verification required: minimum level {0}")]
VerificationRequired(u8),
#[error("service handler error: {0}")]
Handler(String),
}
impl From<ciborium::ser::Error<std::io::Error>> for ServiceError {
fn from(e: ciborium::ser::Error<std::io::Error>) -> Self {
ServiceError::Serialization(e.to_string())
}
}
impl From<ciborium::de::Error<std::io::Error>> for ServiceError {
fn from(e: ciborium::de::Error<std::io::Error>) -> Self {
ServiceError::Serialization(e.to_string())
}
}

@@ -0,0 +1,119 @@
//! Service identity management using Ed25519.
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;
use sha2::{Digest, Sha256};
/// A service participant's identity (Ed25519 keypair).
#[derive(Clone)]
pub struct ServiceIdentity {
signing_key: SigningKey,
}
impl ServiceIdentity {
/// Generate a new random identity.
pub fn generate() -> Self {
use rand::RngCore;
let mut secret = [0u8; 32];
OsRng.fill_bytes(&mut secret);
let signing_key = SigningKey::from_bytes(&secret);
Self { signing_key }
}
/// Create from an existing secret key.
pub fn from_secret(secret: &[u8; 32]) -> Self {
let signing_key = SigningKey::from_bytes(secret);
Self { signing_key }
}
/// Get the 32-byte public key.
pub fn public_key(&self) -> [u8; 32] {
self.signing_key.verifying_key().to_bytes()
}
/// Get the 32-byte secret key (for persistence).
pub fn secret_key(&self) -> [u8; 32] {
self.signing_key.to_bytes()
}
/// Compute the 16-byte mesh address from the public key.
pub fn address(&self) -> [u8; 16] {
compute_address(&self.public_key())
}
/// Sign a message.
pub fn sign(&self, message: &[u8]) -> [u8; 64] {
let sig = self.signing_key.sign(message);
sig.to_bytes()
}
/// Verify a signature against a public key.
pub fn verify(public_key: &[u8; 32], message: &[u8], signature: &[u8; 64]) -> bool {
let Ok(verifying_key) = VerifyingKey::from_bytes(public_key) else {
return false;
};
let sig = Signature::from_bytes(signature);
verifying_key.verify(message, &sig).is_ok()
}
}
/// Compute a 16-byte mesh address from a 32-byte public key.
///
/// Address = SHA-256(public_key)[0..16]
pub fn compute_address(public_key: &[u8; 32]) -> [u8; 16] {
let hash = Sha256::digest(public_key);
let mut addr = [0u8; 16];
addr.copy_from_slice(&hash[..16]);
addr
}
impl std::fmt::Debug for ServiceIdentity {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ServiceIdentity")
.field("address", &hex::encode(self.address()))
.finish()
}
}
// Hex encoding for debug output
mod hex {
pub fn encode(bytes: impl AsRef<[u8]>) -> String {
bytes.as_ref().iter().map(|b| format!("{b:02x}")).collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn generate_and_sign() {
let id = ServiceIdentity::generate();
let msg = b"hello world";
let sig = id.sign(msg);
assert!(ServiceIdentity::verify(&id.public_key(), msg, &sig));
}
#[test]
fn address_is_deterministic() {
let id = ServiceIdentity::generate();
let addr1 = id.address();
let addr2 = compute_address(&id.public_key());
assert_eq!(addr1, addr2);
}
#[test]
fn wrong_message_fails() {
let id = ServiceIdentity::generate();
let sig = id.sign(b"correct");
assert!(!ServiceIdentity::verify(&id.public_key(), b"wrong", &sig));
}
#[test]
fn roundtrip_secret() {
let id = ServiceIdentity::generate();
let secret = id.secret_key();
let restored = ServiceIdentity::from_secret(&secret);
assert_eq!(id.public_key(), restored.public_key());
}
}
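
The private `hex` helper above can be exercised in isolation; this sketch copies its one-liner so the behavior is visible (a 16-byte mesh address always renders as 32 lowercase hex characters in `Debug` output):

```rust
// Copied from the identity module's private `hex` helper for a standalone demo.
fn encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    assert_eq!(encode(&[0xde, 0xad, 0xbe, 0xef]), "deadbeef");
    // SHA-256(pubkey)[0..16] addresses are 16 bytes => 32 hex chars.
    assert_eq!(encode(&[0u8; 16]).len(), 32);
}
```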

@@ -0,0 +1,90 @@
//! # MeshService — Generic Decentralized Service Layer
//!
//! A protocol and runtime for building decentralized services on mesh networks.
//! Any service following the Announce → Query → Response → Reserve pattern
//! can be implemented on this layer.
//!
//! ## Architecture
//!
//! ```text
//! ┌─────────────────────────────────────────────────────────────┐
//! │ Application Services │
//! │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
//! │ │ FAPP │ │ Housing │ │ Repair │ │ Custom │ ... │
//! │ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
//! │ └────────────┴────────────┴────────────┘ │
//! │ Service Layer (this crate) │
//! │ ServiceMessage, ServiceRouter, Verification │
//! │ ─────────────────────────────────────────────────────── │
//! │ Mesh Layer │
//! │ (provided by quicprochat-p2p or other mesh impl) │
//! └─────────────────────────────────────────────────────────────┘
//! ```
//!
//! ## Quick Start
//!
//! ```rust,ignore
//! use meshservice::{ServiceRouter, ServiceMessage, services::fapp::FappService};
//!
//! // Create router
//! let mut router = ServiceRouter::new(identity, capabilities);
//!
//! // Register services
//! router.register(FappService::new());
//! router.register(HousingService::new());
//!
//! // Handle incoming message
//! let action = router.handle(&incoming_bytes);
//! ```
pub mod identity;
pub mod message;
pub mod router;
pub mod store;
pub mod verification;
pub mod services;
pub mod wire;
pub mod error;
pub mod anti_abuse;
pub mod crypto;
pub use identity::ServiceIdentity;
pub use message::{ServiceMessage, MessageType};
pub use router::{ServiceRouter, ServiceHandler, ServiceAction};
pub use store::ServiceStore;
pub use verification::{Verification, VerificationLevel};
pub use error::ServiceError;
pub use anti_abuse::{RateLimiter, RateLimits, ProofOfWork, SenderReputation, TherapistPolicy};
pub use crypto::{EncryptionKeyPair, is_encrypted_payload, encryption_overhead};
/// Well-known service IDs.
pub mod service_ids {
/// Free Appointment Propagation Protocol (psychotherapy).
pub const FAPP: u32 = 0x0001;
/// Housing / room sharing.
pub const HOUSING: u32 = 0x0002;
/// Repair services / craftsmen.
pub const REPAIR: u32 = 0x0003;
/// Tutoring / education.
pub const TUTOR: u32 = 0x0004;
/// Medical appointments.
pub const MEDICAL: u32 = 0x0005;
/// Legal consultation.
pub const LEGAL: u32 = 0x0006;
/// Volunteer coordination.
pub const VOLUNTEER: u32 = 0x0007;
/// Events / tickets.
pub const EVENTS: u32 = 0x0008;
/// Reserved for user-defined services.
pub const CUSTOM_START: u32 = 0x8000;
}
/// Capability flags for service participation.
pub mod capabilities {
/// Node can announce/provide services.
pub const PROVIDER: u16 = 0x0100;
/// Node caches and relays service messages.
pub const RELAY: u16 = 0x0200;
/// Node can query/consume services.
pub const CONSUMER: u16 = 0x0400;
}
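
The capability flags compose as an ordinary bitmask; this sketch copies the three constants from the `capabilities` module above to show the intended usage (the variable names are illustrative):

```rust
// Constants copied from the `capabilities` module.
const PROVIDER: u16 = 0x0100;
const RELAY: u16 = 0x0200;
const CONSUMER: u16 = 0x0400;

fn main() {
    // A full node that announces, relays, and consumes services:
    let caps = PROVIDER | RELAY | CONSUMER;
    assert_eq!(caps, 0x0700);

    // Membership checks are plain bit tests.
    assert!(caps & RELAY != 0);
    let consumer_only = CONSUMER;
    assert!(consumer_only & PROVIDER == 0);
}
```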

@@ -0,0 +1,321 @@
//! Core message types for the service layer.
use std::time::{SystemTime, UNIX_EPOCH};
use serde::{Deserialize, Serialize};
use crate::identity::ServiceIdentity;
use crate::verification::Verification;
/// Message types within a service.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum MessageType {
/// Provider announces availability.
Announce = 0x01,
/// Consumer queries for matches.
Query = 0x02,
/// Response to a query.
Response = 0x03,
/// Consumer reserves a slot/item.
Reserve = 0x04,
/// Provider confirms/rejects reservation.
Confirm = 0x05,
/// Either party cancels.
Cancel = 0x06,
/// Provider updates an existing announce (partial).
Update = 0x07,
/// Provider revokes an announce.
Revoke = 0x08,
}
impl TryFrom<u8> for MessageType {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(MessageType::Announce),
0x02 => Ok(MessageType::Query),
0x03 => Ok(MessageType::Response),
0x04 => Ok(MessageType::Reserve),
0x05 => Ok(MessageType::Confirm),
0x06 => Ok(MessageType::Cancel),
0x07 => Ok(MessageType::Update),
0x08 => Ok(MessageType::Revoke),
_ => Err(()),
}
}
}
/// A generic service message that can carry any application payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServiceMessage {
/// Service identifier (which application).
pub service_id: u32,
/// Message type within service.
pub message_type: MessageType,
/// Protocol version for forward compatibility.
pub version: u8,
/// Unique message ID.
pub id: [u8; 16],
/// Sender's mesh address.
pub sender_address: [u8; 16],
/// Application-specific CBOR payload.
pub payload: Vec<u8>,
/// Ed25519 signature over signable fields.
pub signature: Vec<u8>,
/// Optional verifications from trusted parties.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub verifications: Vec<Verification>,
/// Monotonically increasing per sender (dedup/supersede).
pub sequence: u64,
/// Time-to-live in hours.
pub ttl_hours: u16,
/// Unix timestamp of creation.
pub timestamp: u64,
/// Current hop count (incremented on re-broadcast).
pub hop_count: u8,
/// Maximum propagation hops.
pub max_hops: u8,
}
/// Default TTL: 7 days.
const DEFAULT_TTL_HOURS: u16 = 168;
/// Default max hops.
const DEFAULT_MAX_HOPS: u8 = 8;
impl ServiceMessage {
/// Create a new service message.
pub fn new(
identity: &ServiceIdentity,
service_id: u32,
message_type: MessageType,
payload: Vec<u8>,
sequence: u64,
) -> Self {
Self::with_options(
identity,
service_id,
message_type,
payload,
sequence,
DEFAULT_TTL_HOURS,
DEFAULT_MAX_HOPS,
)
}
/// Create with custom TTL and max hops.
pub fn with_options(
identity: &ServiceIdentity,
service_id: u32,
message_type: MessageType,
payload: Vec<u8>,
sequence: u64,
ttl_hours: u16,
max_hops: u8,
) -> Self {
use sha2::{Digest, Sha256};
let sender_address = identity.address();
// Generate unique ID from address + sequence
let id_hash = Sha256::digest(
[&sender_address[..], &sequence.to_le_bytes()].concat()
);
let mut id = [0u8; 16];
id.copy_from_slice(&id_hash[..16]);
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let mut msg = Self {
service_id,
message_type,
version: 1,
id,
sender_address,
payload,
signature: Vec::new(),
verifications: Vec::new(),
sequence,
ttl_hours,
timestamp,
hop_count: 0,
max_hops,
};
let signable = msg.signable_bytes();
msg.signature = identity.sign(&signable).to_vec();
msg
}
/// Create an announce message.
pub fn announce(
identity: &ServiceIdentity,
service_id: u32,
payload: Vec<u8>,
sequence: u64,
) -> Self {
Self::new(identity, service_id, MessageType::Announce, payload, sequence)
}
/// Create a query message.
pub fn query(
identity: &ServiceIdentity,
service_id: u32,
payload: Vec<u8>,
) -> Self {
// Queries use random sequence (not monotonic)
let sequence = rand::random();
Self::with_options(
identity,
service_id,
MessageType::Query,
payload,
sequence,
1, // 1 hour TTL for queries
DEFAULT_MAX_HOPS,
)
}
/// Create a response message.
pub fn response(
identity: &ServiceIdentity,
service_id: u32,
query_id: [u8; 16],
payload: Vec<u8>,
) -> Self {
let mut msg = Self::new(
identity,
service_id,
MessageType::Response,
payload,
rand::random(),
);
// Response ID matches query ID for correlation. Re-sign afterwards:
// `id` is part of the signable bytes, so overwriting it after
// construction would otherwise invalidate the signature.
msg.id = query_id;
msg.signature = identity.sign(&msg.signable_bytes()).to_vec();
msg
}
/// Assemble bytes for signing/verification.
/// Excludes signature, hop_count, verifications (mutable fields).
fn signable_bytes(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(256);
buf.extend_from_slice(&self.service_id.to_le_bytes());
buf.push(self.message_type as u8);
buf.push(self.version);
buf.extend_from_slice(&self.id);
buf.extend_from_slice(&self.sender_address);
buf.extend_from_slice(&(self.payload.len() as u32).to_le_bytes());
buf.extend_from_slice(&self.payload);
buf.extend_from_slice(&self.sequence.to_le_bytes());
buf.extend_from_slice(&self.ttl_hours.to_le_bytes());
buf.extend_from_slice(&self.timestamp.to_le_bytes());
buf.push(self.max_hops);
buf
}
/// Verify the signature using the sender's public key.
pub fn verify(&self, sender_public_key: &[u8; 32]) -> bool {
use crate::identity::compute_address;
// Verify address matches key
if compute_address(sender_public_key) != self.sender_address {
return false;
}
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = self.signable_bytes();
ServiceIdentity::verify(sender_public_key, &signable, &sig)
}
/// Check if the message has expired.
pub fn is_expired(&self) -> bool {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let ttl_secs = u64::from(self.ttl_hours) * 3600;
now.saturating_sub(self.timestamp) > ttl_secs
}
/// Check if the message can still propagate.
pub fn can_propagate(&self) -> bool {
self.hop_count < self.max_hops && !self.is_expired()
}
/// Create a forwarded copy with incremented hop count.
pub fn forwarded(&self) -> Self {
let mut copy = self.clone();
copy.hop_count = copy.hop_count.saturating_add(1);
copy
}
/// Get the highest verification level attached.
pub fn verification_level(&self) -> u8 {
self.verifications
.iter()
.map(|v| v.level)
.max()
.unwrap_or(0)
}
/// Add a verification to the message.
pub fn add_verification(&mut self, verification: Verification) {
self.verifications.push(verification);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn create_and_verify() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(
&id,
crate::service_ids::FAPP,
b"test payload".to_vec(),
1,
);
assert!(msg.verify(&id.public_key()));
assert!(!msg.is_expired());
assert!(msg.can_propagate());
assert_eq!(msg.hop_count, 0);
}
#[test]
fn forwarded_increments_hop() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, 1, vec![], 1);
let fwd = msg.forwarded();
assert_eq!(fwd.hop_count, 1);
assert!(fwd.verify(&id.public_key())); // Still valid
}
#[test]
fn tampered_fails_verify() {
let id = ServiceIdentity::generate();
let mut msg = ServiceMessage::announce(&id, 1, b"original".to_vec(), 1);
msg.payload = b"tampered".to_vec();
assert!(!msg.verify(&id.public_key()));
}
#[test]
fn query_has_short_ttl() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::query(&id, 1, vec![]);
assert_eq!(msg.ttl_hours, 1);
}
}

View File

@@ -0,0 +1,289 @@
//! Service router dispatches messages to service-specific handlers.
use std::collections::HashMap;
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::store::{ServiceStore, StoredMessage};
use crate::verification::TrustedVerifiers;
/// Action returned by a service handler.
#[derive(Debug)]
pub enum ServiceAction {
/// Message handled, do nothing more.
Handled,
/// Store the message locally.
Store,
/// Store and forward to peers.
StoreAndForward,
/// Forward without storing (pass-through relay).
ForwardOnly,
/// Drop the message silently.
Drop,
/// Send a response back.
Respond(ServiceMessage),
/// Reject with error.
Reject(ServiceError),
}
/// Trait for service-specific handlers.
pub trait ServiceHandler: Send + Sync {
/// The service ID this handler manages.
fn service_id(&self) -> u32;
/// Human-readable service name.
fn name(&self) -> &str;
/// Handle an incoming message.
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError>;
/// Validate a message payload (service-specific logic).
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
// Default: accept all
let _ = message;
Ok(())
}
/// Check if a message matches a query.
fn matches_query(&self, announce: &StoredMessage, query: &ServiceMessage) -> bool;
}
/// Context passed to handlers.
pub struct HandlerContext<'a> {
/// Current node's capabilities.
pub capabilities: u16,
/// The store (for lookups during handle).
pub store: &'a ServiceStore,
/// Trusted verifiers for checking.
pub trusted_verifiers: &'a TrustedVerifiers,
/// Sender's public key (if known).
pub sender_public_key: Option<[u8; 32]>,
}
/// Routes messages to appropriate service handlers.
pub struct ServiceRouter {
/// Service ID -> Handler.
handlers: HashMap<u32, Box<dyn ServiceHandler>>,
/// Shared message store.
store: ServiceStore,
/// Node capabilities.
capabilities: u16,
/// Trusted verifiers.
trusted_verifiers: TrustedVerifiers,
/// Minimum verification level to accept announces (0 = any).
min_verification_level: u8,
}
impl ServiceRouter {
/// Create a new router.
pub fn new(capabilities: u16) -> Self {
Self {
handlers: HashMap::new(),
store: ServiceStore::new(),
capabilities,
trusted_verifiers: TrustedVerifiers::new(),
min_verification_level: 0,
}
}
/// Register a service handler.
pub fn register(&mut self, handler: Box<dyn ServiceHandler>) {
let id = handler.service_id();
self.handlers.insert(id, handler);
}
/// Set trusted verifiers.
pub fn set_trusted_verifiers(&mut self, verifiers: TrustedVerifiers) {
self.trusted_verifiers = verifiers;
}
/// Set minimum verification level for announces.
pub fn set_min_verification_level(&mut self, level: u8) {
self.min_verification_level = level;
}
/// Access the store.
pub fn store(&self) -> &ServiceStore {
&self.store
}
/// Mutable access to store.
pub fn store_mut(&mut self) -> &mut ServiceStore {
&mut self.store
}
/// Check if a service is registered.
pub fn has_service(&self, service_id: u32) -> bool {
self.handlers.contains_key(&service_id)
}
/// Handle an incoming message.
pub fn handle(
&mut self,
message: ServiceMessage,
sender_public_key: Option<[u8; 32]>,
) -> Result<ServiceAction, ServiceError> {
// Basic validation
if message.is_expired() {
return Err(ServiceError::Expired);
}
if message.hop_count > message.max_hops {
return Err(ServiceError::MaxHopsExceeded);
}
// Get handler
let handler = self
.handlers
.get(&message.service_id)
.ok_or(ServiceError::UnknownService(message.service_id))?;
// Validate message with handler
handler.validate(&message)?;
// Verify signature if we have public key
if let Some(pk) = &sender_public_key {
if !message.verify(pk) {
return Err(ServiceError::SignatureInvalid);
}
}
// Check verification level for announces
if message.message_type == MessageType::Announce && self.min_verification_level > 0 {
let level = self
.trusted_verifiers
.highest_level(&message.verifications, &message.sender_address);
if (level as u8) < self.min_verification_level {
return Err(ServiceError::VerificationRequired(self.min_verification_level));
}
}
// Build context
let context = HandlerContext {
capabilities: self.capabilities,
store: &self.store,
trusted_verifiers: &self.trusted_verifiers,
sender_public_key,
};
// Dispatch to handler
let action = handler.handle(&message, &context)?;
// Process action
match &action {
ServiceAction::Store | ServiceAction::StoreAndForward => {
// Storing requires the sender's public key for later
// re-verification; without it the message is not cached
// (forwarding, where applicable, is unaffected).
if let Some(pk) = sender_public_key {
self.store.store(message, pk);
}
}
_ => {}
}
Ok(action)
}
/// Query the store for matching announces.
pub fn query(&self, query: &ServiceMessage) -> Vec<&StoredMessage> {
let Some(handler) = self.handlers.get(&query.service_id) else {
return Vec::new();
};
self.store.query(query.service_id, |stored| {
stored.message.message_type == MessageType::Announce
&& handler.matches_query(stored, query)
})
}
/// Get handler name for a service.
pub fn service_name(&self, service_id: u32) -> Option<&str> {
self.handlers.get(&service_id).map(|h| h.name())
}
/// List registered services.
pub fn services(&self) -> Vec<(u32, &str)> {
self.handlers
.iter()
.map(|(&id, h)| (id, h.name()))
.collect()
}
}
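The registration/dispatch pattern used by `ServiceRouter` can be reduced to a few lines: trait objects keyed by service ID in a `HashMap`, with the handler itself reporting its key. The names below are illustrative stand-ins, not this crate's real types.

```rust
use std::collections::HashMap;

// Minimal stand-in for ServiceHandler.
trait Handler {
    fn id(&self) -> u32;
    fn name(&self) -> &str;
}

struct Echo;
impl Handler for Echo {
    fn id(&self) -> u32 { 42 }
    fn name(&self) -> &str { "Echo" }
}

// Minimal stand-in for ServiceRouter's handler map.
struct Registry {
    handlers: HashMap<u32, Box<dyn Handler>>,
}

impl Registry {
    fn register(&mut self, h: Box<dyn Handler>) {
        // The handler supplies its own key, as in ServiceRouter::register.
        self.handlers.insert(h.id(), h);
    }
    fn name_of(&self, id: u32) -> Option<&str> {
        self.handlers.get(&id).map(|h| h.name())
    }
}

fn main() {
    let mut r = Registry { handlers: HashMap::new() };
    r.register(Box::new(Echo));
    assert_eq!(r.name_of(42), Some("Echo"));
    assert_eq!(r.name_of(7), None); // unregistered service
}
```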
#[cfg(test)]
mod tests {
use super::*;
use crate::{identity::ServiceIdentity, service_ids::FAPP};
struct TestHandler;
impl ServiceHandler for TestHandler {
fn service_id(&self) -> u32 {
FAPP
}
fn name(&self) -> &str {
"Test"
}
fn handle(
&self,
message: &ServiceMessage,
_context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => Ok(ServiceAction::StoreAndForward),
MessageType::Query => Ok(ServiceAction::Handled),
_ => Ok(ServiceAction::Drop),
}
}
fn matches_query(&self, _announce: &StoredMessage, _query: &ServiceMessage) -> bool {
true // Match all for test
}
}
#[test]
fn register_and_handle() {
let mut router = ServiceRouter::new(crate::capabilities::RELAY);
router.register(Box::new(TestHandler));
assert!(router.has_service(FAPP));
assert_eq!(router.service_name(FAPP), Some("Test"));
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, FAPP, vec![], 1);
let action = router.handle(msg.clone(), Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
// Message should be stored
assert_eq!(router.store().len(), 1);
}
#[test]
fn unknown_service_rejected() {
let mut router = ServiceRouter::new(0);
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, 9999, vec![], 1);
let result = router.handle(msg, Some(id.public_key()));
assert!(matches!(result, Err(ServiceError::UnknownService(9999))));
}
#[test]
fn invalid_signature_rejected() {
let mut router = ServiceRouter::new(0);
router.register(Box::new(TestHandler));
let id1 = ServiceIdentity::generate();
let id2 = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id1, FAPP, vec![], 1);
// Pass wrong public key
let result = router.handle(msg, Some(id2.public_key()));
assert!(matches!(result, Err(ServiceError::SignatureInvalid)));
}
}

View File

@@ -0,0 +1,479 @@
//! FAPP — Free Appointment Propagation Protocol.
//!
//! Decentralized psychotherapy appointment discovery.
//!
//! ## Flow
//!
//! 1. Therapist announces available slots (specialism, location, modality).
//! 2. Announcement floods through mesh (TTL-limited, signature-verified).
//! 3. Patient queries for matching slots (specialism, distance).
//! 4. Relays respond with cached matches.
//! 5. Patient reserves slot (E2E encrypted to therapist).
//! 6. Therapist confirms/rejects.
use serde::{Deserialize, Serialize};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::router::{HandlerContext, ServiceAction, ServiceHandler};
use crate::service_ids::FAPP;
use crate::store::StoredMessage;
use crate::wire::{decode_payload, encode_payload};
/// Therapy specialisms.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum Specialism {
GeneralPsychotherapy = 0x01,
CognitiveBehavioral = 0x02,
Psychoanalysis = 0x03,
SystemicTherapy = 0x04,
TraumaFocused = 0x05,
ChildAndAdolescent = 0x06,
CoupleAndFamily = 0x07,
Addiction = 0x08,
Neuropsychology = 0x09,
}
impl TryFrom<u8> for Specialism {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(Self::GeneralPsychotherapy),
0x02 => Ok(Self::CognitiveBehavioral),
0x03 => Ok(Self::Psychoanalysis),
0x04 => Ok(Self::SystemicTherapy),
0x05 => Ok(Self::TraumaFocused),
0x06 => Ok(Self::ChildAndAdolescent),
0x07 => Ok(Self::CoupleAndFamily),
0x08 => Ok(Self::Addiction),
0x09 => Ok(Self::Neuropsychology),
_ => Err(()),
}
}
}
/// Therapy modality.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum Modality {
InPerson = 0x01,
VideoCall = 0x02,
PhoneCall = 0x03,
TextBased = 0x04,
}
/// Slot announcement payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlotAnnounce {
/// Therapist's specialisms (bitfield).
pub specialisms: u16,
/// Modality (stored as a `Modality` discriminant).
pub modality: u8,
/// Postal code (first 3 digits for privacy).
pub postal_prefix: String,
/// Geohash (6 chars, ~1.2km precision).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub geohash: Option<String>,
/// Available slots count.
pub available_slots: u8,
/// Earliest available date (days from epoch).
pub earliest_days: u16,
/// Insurance types accepted (bitfield).
pub insurance: u8,
/// Optional profile URL for verification.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub profile_url: Option<String>,
/// Optional display name.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub display_name: Option<String>,
}
impl SlotAnnounce {
/// Create a new announcement.
pub fn new(specialisms: &[Specialism], modality: Modality, postal_prefix: &str) -> Self {
let spec_bits = specialisms.iter().fold(0u16, |acc, s| acc | (1 << (*s as u8)));
Self {
specialisms: spec_bits,
modality: modality as u8,
postal_prefix: postal_prefix.into(),
geohash: None,
available_slots: 1,
earliest_days: 0,
insurance: 0xFF, // All accepted by default
profile_url: None,
display_name: None,
}
}
/// Set geohash location.
pub fn with_geohash(mut self, geohash: &str) -> Self {
self.geohash = Some(geohash[..6.min(geohash.len())].into());
self
}
/// Set available slots count.
pub fn with_slots(mut self, count: u8) -> Self {
self.available_slots = count;
self
}
/// Set earliest availability.
pub fn with_earliest(mut self, days_from_now: u16) -> Self {
self.earliest_days = days_from_now;
self
}
/// Set profile URL.
pub fn with_profile(mut self, url: &str) -> Self {
self.profile_url = Some(url.into());
self
}
/// Set display name.
pub fn with_name(mut self, name: &str) -> Self {
self.display_name = Some(name.into());
self
}
/// Check if a specialism is offered.
pub fn has_specialism(&self, spec: Specialism) -> bool {
self.specialisms & (1 << (spec as u8)) != 0
}
/// Encode to CBOR bytes.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR bytes.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
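The specialism set encoding above is a plain bitfield: each enum discriminant selects one bit of a `u16`, so membership and query overlap are single AND operations. A self-contained sketch using the discriminant values from the `Specialism` enum:

```rust
// Fold discriminants into a bitfield, as SlotAnnounce::new does.
fn to_bits(discriminants: &[u8]) -> u16 {
    discriminants.iter().fold(0u16, |acc, d| acc | (1 << d))
}

// Single-bit membership test, as has_specialism does.
fn has(bits: u16, discriminant: u8) -> bool {
    bits & (1 << discriminant) != 0
}

fn main() {
    // 0x02 = CognitiveBehavioral, 0x05 = TraumaFocused
    let bits = to_bits(&[0x02, 0x05]);
    assert!(has(bits, 0x02));
    assert!(has(bits, 0x05));
    assert!(!has(bits, 0x08)); // Addiction not offered
    // Query overlap: any shared specialism gives a non-zero AND.
    let query = to_bits(&[0x05]);
    assert!(bits & query != 0);
}
```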
/// Insurance types.
pub mod insurance {
pub const PRIVATE: u8 = 0x01;
pub const PUBLIC: u8 = 0x02;
pub const SELF_PAY: u8 = 0x04;
}
/// Slot query payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlotQuery {
/// Desired specialisms (bitfield, any match).
pub specialisms: u16,
/// Postal prefix to search.
pub postal_prefix: String,
/// Max distance in km (optional).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub max_distance_km: Option<u8>,
/// Required modality (0 = any).
pub modality: u8,
/// Max wait in days.
pub max_wait_days: u16,
/// Insurance type required.
pub insurance: u8,
}
impl SlotQuery {
/// Create a query for a specialism in a postal area.
pub fn new(specialism: Specialism, postal_prefix: &str) -> Self {
Self {
specialisms: 1 << (specialism as u8),
postal_prefix: postal_prefix.into(),
max_distance_km: None,
modality: 0,
max_wait_days: 365,
insurance: 0xFF,
}
}
/// Require specific modality.
pub fn with_modality(mut self, modality: Modality) -> Self {
self.modality = modality as u8;
self
}
/// Set max wait time.
pub fn with_max_wait(mut self, days: u16) -> Self {
self.max_wait_days = days;
self
}
/// Check if an announce matches this query.
pub fn matches(&self, announce: &SlotAnnounce) -> bool {
// Specialism overlap
if announce.specialisms & self.specialisms == 0 {
return false;
}
// Postal prefix
if !announce.postal_prefix.starts_with(&self.postal_prefix)
&& !self.postal_prefix.starts_with(&announce.postal_prefix)
{
return false;
}
// Modality (0 = any; both sides store a `Modality` discriminant,
// so a non-zero requirement must match exactly — a bitwise AND of
// discriminants would falsely match e.g. PhoneCall against InPerson)
if self.modality != 0 && announce.modality != self.modality {
return false;
}
// Wait time
if announce.earliest_days > self.max_wait_days {
return false;
}
// Insurance
if announce.insurance & self.insurance == 0 {
return false;
}
// Available slots
announce.available_slots > 0
}
/// Encode to CBOR bytes.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR bytes.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
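The location test in `matches` deserves a note: two postal prefixes are treated as compatible when either is a prefix of the other, so a broad query ("10") matches a precise announce ("104") and vice versa. A minimal sketch of just that predicate:

```rust
// Mutual-prefix compatibility, as used in SlotQuery::matches.
fn postal_compatible(announce: &str, query: &str) -> bool {
    announce.starts_with(query) || query.starts_with(announce)
}

fn main() {
    assert!(postal_compatible("104", "10"));  // broad query, precise announce
    assert!(postal_compatible("10", "104"));  // broad announce, precise query
    assert!(postal_compatible("104", "104")); // exact match
    assert!(!postal_compatible("104", "200")); // disjoint areas
}
```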
/// FAPP service handler.
pub struct FappService {
/// Whether this node is a therapist (can announce).
pub is_provider: bool,
/// Whether this node relays FAPP messages.
pub is_relay: bool,
}
impl FappService {
/// Create a new FAPP handler.
pub fn new(is_provider: bool, is_relay: bool) -> Self {
Self {
is_provider,
is_relay,
}
}
/// Create a relay-only handler.
pub fn relay() -> Self {
Self::new(false, true)
}
/// Create a provider handler.
pub fn provider() -> Self {
Self::new(true, true)
}
}
impl ServiceHandler for FappService {
fn service_id(&self) -> u32 {
FAPP
}
fn name(&self) -> &str {
"FAPP"
}
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => {
// Validate payload
let _announce = SlotAnnounce::from_bytes(&message.payload)?;
// Store and forward if we're a relay
if self.is_relay {
Ok(ServiceAction::StoreAndForward)
} else {
Ok(ServiceAction::Store)
}
}
MessageType::Query => {
// Parse query
let query = SlotQuery::from_bytes(&message.payload)?;
// Find matches in store
let matches: Vec<_> = context
.store
.by_service(FAPP)
.into_iter()
.filter(|stored| {
if stored.message.message_type != MessageType::Announce {
return false;
}
if let Ok(announce) = SlotAnnounce::from_bytes(&stored.message.payload) {
query.matches(&announce)
} else {
false
}
})
.collect();
// If we have matches, we could respond (simplified for now)
if !matches.is_empty() {
// In a real impl, we'd aggregate and send response
Ok(ServiceAction::Handled)
} else if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Reserve | MessageType::Confirm | MessageType::Cancel => {
// E2E encrypted, just forward
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Revoke => {
// Store removal is not implemented yet; acknowledge only.
Ok(ServiceAction::Handled)
}
_ => Ok(ServiceAction::Drop),
}
}
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
match message.message_type {
MessageType::Announce => {
SlotAnnounce::from_bytes(&message.payload)?;
}
MessageType::Query => {
SlotQuery::from_bytes(&message.payload)?;
}
_ => {}
}
Ok(())
}
fn matches_query(&self, announce: &StoredMessage, query_msg: &ServiceMessage) -> bool {
let Ok(announce_data) = SlotAnnounce::from_bytes(&announce.message.payload) else {
return false;
};
let Ok(query) = SlotQuery::from_bytes(&query_msg.payload) else {
return false;
};
query.matches(&announce_data)
}
}
/// Helper to create a FAPP announce message.
pub fn create_announce(
identity: &crate::ServiceIdentity,
announce: &SlotAnnounce,
sequence: u64,
) -> Result<ServiceMessage, ServiceError> {
let payload = announce.to_bytes()?;
Ok(ServiceMessage::announce(identity, FAPP, payload, sequence))
}
/// Helper to create a FAPP query message.
pub fn create_query(
identity: &crate::ServiceIdentity,
query: &SlotQuery,
) -> Result<ServiceMessage, ServiceError> {
let payload = query.to_bytes()?;
Ok(ServiceMessage::query(identity, FAPP, payload))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn slot_announce_roundtrip() {
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral, Specialism::TraumaFocused],
Modality::VideoCall,
"104",
)
.with_slots(3)
.with_profile("https://therapists.de/dr-mueller");
let bytes = announce.to_bytes().unwrap();
let decoded = SlotAnnounce::from_bytes(&bytes).unwrap();
assert!(decoded.has_specialism(Specialism::CognitiveBehavioral));
assert!(decoded.has_specialism(Specialism::TraumaFocused));
assert!(!decoded.has_specialism(Specialism::Addiction));
assert_eq!(decoded.available_slots, 3);
assert_eq!(
decoded.profile_url,
Some("https://therapists.de/dr-mueller".into())
);
}
#[test]
fn query_matches_announce() {
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral],
Modality::InPerson,
"104",
)
.with_slots(2);
let matching_query = SlotQuery::new(Specialism::CognitiveBehavioral, "104");
assert!(matching_query.matches(&announce));
let wrong_spec = SlotQuery::new(Specialism::Addiction, "104");
assert!(!wrong_spec.matches(&announce));
let wrong_location = SlotQuery::new(Specialism::CognitiveBehavioral, "200");
assert!(!wrong_location.matches(&announce));
}
#[test]
fn create_message_helpers() {
let id = ServiceIdentity::generate();
let announce = SlotAnnounce::new(&[Specialism::GeneralPsychotherapy], Modality::VideoCall, "10");
let msg = create_announce(&id, &announce, 1).unwrap();
assert_eq!(msg.service_id, FAPP);
assert_eq!(msg.message_type, MessageType::Announce);
let query = SlotQuery::new(Specialism::GeneralPsychotherapy, "10");
let msg = create_query(&id, &query).unwrap();
assert_eq!(msg.service_id, FAPP);
assert_eq!(msg.message_type, MessageType::Query);
}
#[test]
fn fapp_handler_processes_announce() {
use crate::router::ServiceRouter;
use crate::capabilities;
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));
let id = ServiceIdentity::generate();
let announce = SlotAnnounce::new(&[Specialism::TraumaFocused], Modality::InPerson, "100");
let msg = create_announce(&id, &announce, 1).unwrap();
let action = router.handle(msg.clone(), Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
// Should be stored
assert_eq!(router.store().service_count(FAPP), 1);
}
}

View File

@@ -0,0 +1,489 @@
//! Housing Service — Decentralized room/apartment sharing.
//!
//! Demonstrates how a second service can be built on the mesh layer.
//!
//! ## Flow
//!
//! 1. Landlord announces available room (type, size, price, location).
//! 2. Announcement floods through mesh.
//! 3. Seeker queries for matching listings.
//! 4. Relays respond with cached matches.
//! 5. Seeker reserves viewing slot (E2E encrypted).
//! 6. Landlord confirms/rejects.
use serde::{Deserialize, Serialize};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::router::{HandlerContext, ServiceAction, ServiceHandler};
use crate::service_ids::HOUSING;
use crate::store::StoredMessage;
use crate::wire::{decode_payload, encode_payload};
/// Listing type.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum ListingType {
Room = 0x01,
SharedFlat = 0x02,
Apartment = 0x03,
House = 0x04,
Studio = 0x05,
Sublet = 0x06,
}
impl TryFrom<u8> for ListingType {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(Self::Room),
0x02 => Ok(Self::SharedFlat),
0x03 => Ok(Self::Apartment),
0x04 => Ok(Self::House),
0x05 => Ok(Self::Studio),
0x06 => Ok(Self::Sublet),
_ => Err(()),
}
}
}
/// Amenities bitfield.
pub mod amenities {
pub const FURNISHED: u16 = 0x0001;
pub const BALCONY: u16 = 0x0002;
pub const PARKING: u16 = 0x0004;
pub const PETS_ALLOWED: u16 = 0x0008;
pub const WASHING_MACHINE: u16 = 0x0010;
pub const DISHWASHER: u16 = 0x0020;
pub const ELEVATOR: u16 = 0x0040;
pub const GARDEN: u16 = 0x0080;
pub const INTERNET: u16 = 0x0100;
pub const HEATING_INCLUDED: u16 = 0x0200;
}
/// Room/listing announcement.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ListingAnnounce {
/// Type of listing.
pub listing_type: u8,
/// Size in square meters.
pub size_sqm: u16,
/// Monthly rent in cents (EUR).
pub rent_cents: u32,
/// Postal prefix (3 digits).
pub postal_prefix: String,
/// Geohash for location (6 chars).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub geohash: Option<String>,
/// Number of rooms (0 for studio).
pub rooms: u8,
/// Available from (days from epoch).
pub available_from_days: u16,
/// Minimum rental period in months (0 = unlimited).
pub min_months: u8,
/// Maximum rental period in months (0 = unlimited).
pub max_months: u8,
/// Amenities bitfield.
pub amenities: u16,
/// Optional title.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
/// Optional external listing URL.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub listing_url: Option<String>,
}
impl ListingAnnounce {
/// Create a new listing.
pub fn new(listing_type: ListingType, size_sqm: u16, rent_euros: u32, postal_prefix: &str) -> Self {
Self {
listing_type: listing_type as u8,
size_sqm,
rent_cents: rent_euros * 100,
postal_prefix: postal_prefix.into(),
geohash: None,
rooms: 1,
available_from_days: 0,
min_months: 0,
max_months: 0,
amenities: 0,
title: None,
listing_url: None,
}
}
/// Set rooms count.
pub fn with_rooms(mut self, rooms: u8) -> Self {
self.rooms = rooms;
self
}
/// Set geohash.
pub fn with_geohash(mut self, geohash: &str) -> Self {
self.geohash = Some(geohash[..6.min(geohash.len())].into());
self
}
/// Set amenities.
pub fn with_amenities(mut self, amenities: u16) -> Self {
self.amenities = amenities;
self
}
/// Set title.
pub fn with_title(mut self, title: &str) -> Self {
self.title = Some(title.into());
self
}
/// Set minimum/maximum rental period.
pub fn with_term(mut self, min_months: u8, max_months: u8) -> Self {
self.min_months = min_months;
self.max_months = max_months;
self
}
/// Check whether the listing has an amenity.
pub fn has_amenity(&self, amenity: u16) -> bool {
self.amenities & amenity != 0
}
/// Get rent in euros.
pub fn rent_euros(&self) -> u32 {
self.rent_cents / 100
}
/// Encode to CBOR.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
/// Housing query.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ListingQuery {
/// Desired listing types (bitfield).
pub listing_types: u8,
/// Postal prefix.
pub postal_prefix: String,
/// Min size in sqm.
pub min_size_sqm: u16,
/// Max rent in cents.
pub max_rent_cents: u32,
/// Min rooms.
pub min_rooms: u8,
/// Required amenities (all must match).
pub required_amenities: u16,
/// Max move-in days.
pub max_move_in_days: u16,
}
impl ListingQuery {
/// Create a simple query.
pub fn new(postal_prefix: &str, max_rent_euros: u32) -> Self {
Self {
listing_types: 0xFF, // Any type
postal_prefix: postal_prefix.into(),
min_size_sqm: 0,
max_rent_cents: max_rent_euros * 100,
min_rooms: 0,
required_amenities: 0,
max_move_in_days: 365,
}
}
/// Filter by type.
pub fn with_type(mut self, listing_type: ListingType) -> Self {
self.listing_types = 1 << (listing_type as u8);
self
}
/// Require minimum size.
pub fn with_min_size(mut self, sqm: u16) -> Self {
self.min_size_sqm = sqm;
self
}
/// Require minimum rooms.
pub fn with_min_rooms(mut self, rooms: u8) -> Self {
self.min_rooms = rooms;
self
}
/// Require amenities.
pub fn with_amenities(mut self, amenities: u16) -> Self {
self.required_amenities = amenities;
self
}
/// Check if listing matches.
pub fn matches(&self, listing: &ListingAnnounce) -> bool {
// Type match
if self.listing_types != 0xFF && (self.listing_types & (1 << listing.listing_type) == 0) {
return false;
}
// Location
if !listing.postal_prefix.starts_with(&self.postal_prefix)
&& !self.postal_prefix.starts_with(&listing.postal_prefix)
{
return false;
}
// Size
if listing.size_sqm < self.min_size_sqm {
return false;
}
// Rent
if listing.rent_cents > self.max_rent_cents {
return false;
}
// Rooms
if listing.rooms < self.min_rooms {
return false;
}
// Amenities (all required must be present)
if listing.amenities & self.required_amenities != self.required_amenities {
return false;
}
// Availability
listing.available_from_days <= self.max_move_in_days
}
/// Encode to CBOR.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
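Note the asymmetry with FAPP's specialism test: required amenities use a subset check (every requested bit must be set in the listing), not an any-overlap check. A self-contained sketch of that rule:

```rust
// Subset test, as in ListingQuery::matches for required_amenities.
fn has_all(listing: u16, required: u16) -> bool {
    listing & required == required
}

fn main() {
    // Bit values mirror the amenities module above.
    const FURNISHED: u16 = 0x0001;
    const BALCONY: u16 = 0x0002;
    const PARKING: u16 = 0x0004;
    let listing = FURNISHED | BALCONY;
    assert!(has_all(listing, FURNISHED));
    assert!(has_all(listing, FURNISHED | BALCONY));
    assert!(!has_all(listing, FURNISHED | PARKING)); // parking missing
    assert!(has_all(listing, 0)); // no requirements always passes
}
```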
/// Housing service handler.
pub struct HousingService {
pub is_provider: bool,
pub is_relay: bool,
}
impl HousingService {
/// Create a new handler.
pub fn new(is_provider: bool, is_relay: bool) -> Self {
Self {
is_provider,
is_relay,
}
}
/// Create a relay-only handler.
pub fn relay() -> Self {
Self::new(false, true)
}
/// Create a provider handler.
pub fn provider() -> Self {
Self::new(true, true)
}
}
impl ServiceHandler for HousingService {
fn service_id(&self) -> u32 {
HOUSING
}
fn name(&self) -> &str {
"Housing"
}
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => {
let _listing = ListingAnnounce::from_bytes(&message.payload)?;
if self.is_relay {
Ok(ServiceAction::StoreAndForward)
} else {
Ok(ServiceAction::Store)
}
}
MessageType::Query => {
let query = ListingQuery::from_bytes(&message.payload)?;
// A full implementation would aggregate these matches into a
// Respond action; for now they are computed but unused.
let _matches: Vec<_> = context
.store
.by_service(HOUSING)
.into_iter()
.filter(|stored| {
if stored.message.message_type != MessageType::Announce {
return false;
}
if let Ok(listing) = ListingAnnounce::from_bytes(&stored.message.payload) {
query.matches(&listing)
} else {
false
}
})
.collect();
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Reserve | MessageType::Confirm | MessageType::Cancel => {
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Revoke => Ok(ServiceAction::Handled),
_ => Ok(ServiceAction::Drop),
}
}
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
match message.message_type {
MessageType::Announce => {
ListingAnnounce::from_bytes(&message.payload)?;
}
MessageType::Query => {
ListingQuery::from_bytes(&message.payload)?;
}
_ => {}
}
Ok(())
}
fn matches_query(&self, listing: &StoredMessage, query_msg: &ServiceMessage) -> bool {
let Ok(listing_data) = ListingAnnounce::from_bytes(&listing.message.payload) else {
return false;
};
let Ok(query) = ListingQuery::from_bytes(&query_msg.payload) else {
return false;
};
query.matches(&listing_data)
}
}
/// Helper to create a housing announce.
pub fn create_announce(
identity: &crate::ServiceIdentity,
listing: &ListingAnnounce,
sequence: u64,
) -> Result<ServiceMessage, ServiceError> {
let payload = listing.to_bytes()?;
Ok(ServiceMessage::announce(identity, HOUSING, payload, sequence))
}
/// Helper to create a housing query.
pub fn create_query(
identity: &crate::ServiceIdentity,
query: &ListingQuery,
) -> Result<ServiceMessage, ServiceError> {
let payload = query.to_bytes()?;
Ok(ServiceMessage::query(identity, HOUSING, payload))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn listing_roundtrip() {
let listing = ListingAnnounce::new(ListingType::Apartment, 65, 850, "104")
.with_rooms(2)
.with_amenities(amenities::FURNISHED | amenities::BALCONY)
.with_title("Cozy 2-room in Kreuzberg");
let bytes = listing.to_bytes().unwrap();
let decoded = ListingAnnounce::from_bytes(&bytes).unwrap();
assert_eq!(decoded.size_sqm, 65);
assert_eq!(decoded.rent_euros(), 850);
assert_eq!(decoded.rooms, 2);
assert!(decoded.has_amenity(amenities::FURNISHED));
assert!(decoded.has_amenity(amenities::BALCONY));
assert!(!decoded.has_amenity(amenities::PARKING));
}
#[test]
fn query_matches() {
let listing = ListingAnnounce::new(ListingType::Apartment, 50, 700, "104")
.with_rooms(2)
.with_amenities(amenities::FURNISHED);
// Basic match
let query = ListingQuery::new("104", 800);
assert!(query.matches(&listing));
// Too expensive for query
let cheap_query = ListingQuery::new("104", 500);
assert!(!cheap_query.matches(&listing));
// Wrong location
let wrong_loc = ListingQuery::new("200", 800);
assert!(!wrong_loc.matches(&listing));
// Size requirement
let big_query = ListingQuery::new("104", 800).with_min_size(60);
assert!(!big_query.matches(&listing));
// Amenity requirement
let needs_parking = ListingQuery::new("104", 800).with_amenities(amenities::PARKING);
assert!(!needs_parking.matches(&listing));
}
#[test]
fn create_message_helpers() {
let id = ServiceIdentity::generate();
let listing = ListingAnnounce::new(ListingType::Room, 20, 400, "100");
let msg = create_announce(&id, &listing, 1).unwrap();
assert_eq!(msg.service_id, HOUSING);
assert_eq!(msg.message_type, MessageType::Announce);
let query = ListingQuery::new("100", 500);
let msg = create_query(&id, &query).unwrap();
assert_eq!(msg.service_id, HOUSING);
assert_eq!(msg.message_type, MessageType::Query);
}
#[test]
fn housing_handler_processes_listing() {
use crate::capabilities;
use crate::router::ServiceRouter;
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(HousingService::relay()));
let id = ServiceIdentity::generate();
let listing = ListingAnnounce::new(ListingType::SharedFlat, 15, 350, "100");
let msg = create_announce(&id, &listing, 1).unwrap();
let action = router.handle(msg, Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
assert_eq!(router.store().service_count(HOUSING), 1);
}
}

View File

@@ -0,0 +1,4 @@
//! Built-in service implementations.
pub mod fapp;
pub mod housing;

View File

@@ -0,0 +1,406 @@
//! In-memory message store with eviction policies.
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};
use crate::message::ServiceMessage;
/// Configuration for the message store.
#[derive(Debug, Clone)]
pub struct StoreConfig {
/// Maximum messages per service.
pub max_per_service: usize,
/// Maximum messages per sender (per service).
pub max_per_sender: usize,
/// Maximum total messages.
pub max_total: usize,
/// Prune interval in seconds.
pub prune_interval_secs: u64,
}
impl Default for StoreConfig {
fn default() -> Self {
Self {
max_per_service: 10_000,
max_per_sender: 100,
max_total: 50_000,
prune_interval_secs: 300,
}
}
}
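The store's layout and replacement rule can be sketched with simplified key types (a `u64` sequence standing in for a whole `StoredMessage`): entries are keyed service → sender → message ID, and an existing entry is only replaced by one with a strictly higher sequence number.

```rust
use std::collections::HashMap;

// service_id -> sender -> message id -> sequence (simplified keys).
type Store = HashMap<u32, HashMap<u8, HashMap<u8, u64>>>;

// Returns true if the entry was inserted or replaced.
fn store(s: &mut Store, service: u32, sender: u8, id: u8, seq: u64) -> bool {
    let per_sender = s.entry(service).or_default().entry(sender).or_default();
    match per_sender.get(&id) {
        // Newer-sequence-wins: stale or duplicate sequences are ignored.
        Some(&existing) if seq <= existing => false,
        _ => {
            per_sender.insert(id, seq);
            true
        }
    }
}

fn main() {
    let mut s = Store::new();
    assert!(store(&mut s, 1, 7, 9, 1));  // new message
    assert!(!store(&mut s, 1, 7, 9, 1)); // duplicate sequence rejected
    assert!(store(&mut s, 1, 7, 9, 2));  // higher sequence replaces
}
```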
/// A stored message with metadata.
#[derive(Debug, Clone)]
pub struct StoredMessage {
pub message: ServiceMessage,
/// Sender's public key (needed for verification).
pub sender_public_key: [u8; 32],
/// When we stored this message.
pub stored_at: u64,
}
/// Generic service message store.
///
/// Organized by service_id, then by sender_address, then by message_id.
pub struct ServiceStore {
config: StoreConfig,
/// service_id -> sender_address -> message_id -> StoredMessage
messages: HashMap<u32, HashMap<[u8; 16], HashMap<[u8; 16], StoredMessage>>>,
/// Total message count.
total_count: usize,
/// Last prune timestamp.
last_prune: u64,
}
impl ServiceStore {
/// Create a new store with default config.
pub fn new() -> Self {
Self::with_config(StoreConfig::default())
}
/// Create with custom config.
pub fn with_config(config: StoreConfig) -> Self {
Self {
config,
messages: HashMap::new(),
total_count: 0,
last_prune: 0,
}
}
/// Store a message, returning true if it was new.
pub fn store(&mut self, message: ServiceMessage, sender_public_key: [u8; 32]) -> bool {
// Prune if interval passed
self.maybe_prune();
let service_id = message.service_id;
let sender_address = message.sender_address;
let message_id = message.id;
// Check per-service limit and evict if needed
{
let service_count: usize = self.messages
.get(&service_id)
.map(|s| s.values().map(|m| m.len()).sum())
.unwrap_or(0);
if service_count >= self.config.max_per_service {
self.evict_oldest_in_service(service_id);
}
}
// Check per-sender limit and evict if needed
{
let sender_count = self.messages
.get(&service_id)
.and_then(|s| s.get(&sender_address))
.map(|m| m.len())
.unwrap_or(0);
if sender_count >= self.config.max_per_sender {
self.evict_oldest_from_sender(service_id, sender_address);
}
}
// Get or create maps
let service_map = self.messages.entry(service_id).or_default();
let sender_map = service_map.entry(sender_address).or_default();
// Check for existing message
// Check for existing message
let is_new = if let Some(existing) = sender_map.get(&message_id) {
// Existing entry: accept only if the sequence is strictly higher
if message.sequence <= existing.message.sequence {
return false;
}
// Higher sequence: this replaces the existing entry, not a new message
false
} else {
// New message
true
};
let stored_at = now();
sender_map.insert(
message_id,
StoredMessage {
message,
sender_public_key,
stored_at,
},
);
if is_new {
self.total_count += 1;
}
// Return true for both new messages and updates
true
}
/// Get a message by service, sender, and ID.
pub fn get(
&self,
service_id: u32,
sender_address: &[u8; 16],
message_id: &[u8; 16],
) -> Option<&StoredMessage> {
self.messages
.get(&service_id)?
.get(sender_address)?
.get(message_id)
}
/// Get all messages from a sender in a service.
pub fn by_sender(&self, service_id: u32, sender_address: &[u8; 16]) -> Vec<&StoredMessage> {
self.messages
.get(&service_id)
.and_then(|s| s.get(sender_address))
.map(|m| m.values().collect())
.unwrap_or_default()
}
/// Get all messages in a service.
pub fn by_service(&self, service_id: u32) -> Vec<&StoredMessage> {
self.messages
.get(&service_id)
.map(|s| s.values().flat_map(|m| m.values()).collect())
.unwrap_or_default()
}
/// Query messages with a predicate.
pub fn query<F>(&self, service_id: u32, predicate: F) -> Vec<&StoredMessage>
where
F: Fn(&StoredMessage) -> bool,
{
self.by_service(service_id)
.into_iter()
.filter(|m| predicate(m))
.collect()
}
/// Remove a specific message.
pub fn remove(
&mut self,
service_id: u32,
sender_address: &[u8; 16],
message_id: &[u8; 16],
) -> Option<StoredMessage> {
let result = self
.messages
.get_mut(&service_id)?
.get_mut(sender_address)?
.remove(message_id);
if result.is_some() {
self.total_count = self.total_count.saturating_sub(1);
}
result
}
/// Remove all messages from a sender.
pub fn remove_sender(&mut self, service_id: u32, sender_address: &[u8; 16]) -> usize {
let count = self
.messages
.get_mut(&service_id)
.and_then(|s| s.remove(sender_address))
.map(|m| m.len())
.unwrap_or(0);
self.total_count = self.total_count.saturating_sub(count);
count
}
/// Prune expired messages.
pub fn prune_expired(&mut self) -> usize {
let now = now();
let mut removed = 0;
for service_map in self.messages.values_mut() {
for sender_map in service_map.values_mut() {
let expired: Vec<[u8; 16]> = sender_map
.iter()
.filter(|(_, m)| m.message.is_expired())
.map(|(id, _)| *id)
.collect();
for id in expired {
sender_map.remove(&id);
removed += 1;
}
}
}
self.total_count = self.total_count.saturating_sub(removed);
self.last_prune = now;
removed
}
/// Get total message count.
pub fn len(&self) -> usize {
self.total_count
}
/// Check if empty.
pub fn is_empty(&self) -> bool {
self.total_count == 0
}
/// Get count by service.
pub fn service_count(&self, service_id: u32) -> usize {
self.messages
.get(&service_id)
.map(|s| s.values().map(|m| m.len()).sum())
.unwrap_or(0)
}
/// Run prune if interval passed.
fn maybe_prune(&mut self) {
let now = now();
if now.saturating_sub(self.last_prune) >= self.config.prune_interval_secs {
self.prune_expired();
}
}
/// Evict oldest message in a service.
fn evict_oldest_in_service(&mut self, service_id: u32) {
let Some(service_map) = self.messages.get_mut(&service_id) else {
return;
};
let mut oldest: Option<([u8; 16], [u8; 16], u64)> = None;
for (sender, msgs) in service_map.iter() {
for (id, stored) in msgs.iter() {
match oldest {
Some((_, _, ts)) if stored.message.timestamp < ts => {
oldest = Some((*sender, *id, stored.message.timestamp));
}
None => {
oldest = Some((*sender, *id, stored.message.timestamp));
}
_ => {}
}
}
}
if let Some((sender, id, _)) = oldest {
if let Some(sender_map) = service_map.get_mut(&sender) {
sender_map.remove(&id);
self.total_count = self.total_count.saturating_sub(1);
}
}
}
/// Evict oldest message from a sender.
fn evict_oldest_from_sender(&mut self, service_id: u32, sender_address: [u8; 16]) {
let Some(sender_map) = self
.messages
.get_mut(&service_id)
.and_then(|s| s.get_mut(&sender_address))
else {
return;
};
let oldest = sender_map
.iter()
.min_by_key(|(_, m)| m.message.timestamp)
.map(|(id, _)| *id);
if let Some(id) = oldest {
sender_map.remove(&id);
self.total_count = self.total_count.saturating_sub(1);
}
}
}
impl Default for ServiceStore {
fn default() -> Self {
Self::new()
}
}
fn now() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{identity::ServiceIdentity, message::ServiceMessage, service_ids::FAPP};
fn make_message(id: &ServiceIdentity, seq: u64) -> ServiceMessage {
ServiceMessage::announce(id, FAPP, b"test".to_vec(), seq)
}
#[test]
fn store_and_retrieve() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg = make_message(&id, 1);
assert!(store.store(msg.clone(), id.public_key()));
assert_eq!(store.len(), 1);
let retrieved = store.get(FAPP, &id.address(), &msg.id);
assert!(retrieved.is_some());
}
#[test]
fn duplicate_rejected() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg = make_message(&id, 1);
assert!(store.store(msg.clone(), id.public_key()));
assert!(!store.store(msg.clone(), id.public_key())); // Duplicate
assert_eq!(store.len(), 1);
}
#[test]
fn higher_sequence_updates() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg1 = make_message(&id, 1);
let mut msg2 = make_message(&id, 2);
msg2.id = msg1.id; // Same ID
store.store(msg1.clone(), id.public_key());
assert!(store.store(msg2.clone(), id.public_key())); // Updates
let retrieved = store.get(FAPP, &id.address(), &msg1.id).unwrap();
assert_eq!(retrieved.message.sequence, 2);
}
#[test]
fn query_by_sender() {
let mut store = ServiceStore::new();
let id1 = ServiceIdentity::generate();
let id2 = ServiceIdentity::generate();
store.store(make_message(&id1, 1), id1.public_key());
store.store(make_message(&id1, 2), id1.public_key());
store.store(make_message(&id2, 1), id2.public_key());
let sender1_msgs = store.by_sender(FAPP, &id1.address());
assert_eq!(sender1_msgs.len(), 2);
let sender2_msgs = store.by_sender(FAPP, &id2.address());
assert_eq!(sender2_msgs.len(), 1);
}
#[test]
fn remove_sender() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
store.store(make_message(&id, 1), id.public_key());
store.store(make_message(&id, 2), id.public_key());
assert_eq!(store.len(), 2);
let removed = store.remove_sender(FAPP, &id.address());
assert_eq!(removed, 2);
assert_eq!(store.len(), 0);
}
}
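The upsert contract exercised by the tests above (`store` returns `true` for new messages and for higher-sequence updates to an existing ID, `false` for duplicates or stale sequences) can be sketched standalone. `MiniStore` below is a hypothetical reduction for illustration, not a type from this crate:

```rust
use std::collections::HashMap;

/// Hypothetical reduction of ServiceStore's upsert rule: keyed by message id,
/// a write wins only if the id is new or carries a strictly higher sequence.
struct MiniStore {
    by_id: HashMap<u64, u64>, // message_id -> sequence
}

impl MiniStore {
    fn new() -> Self {
        Self { by_id: HashMap::new() }
    }

    /// Returns true if the message was accepted (new or higher sequence).
    fn store(&mut self, id: u64, sequence: u64) -> bool {
        match self.by_id.get(&id).copied() {
            Some(existing) if sequence <= existing => false, // duplicate or stale
            _ => {
                self.by_id.insert(id, sequence);
                true
            }
        }
    }
}

fn main() {
    let mut s = MiniStore::new();
    assert!(s.store(1, 1)); // new id accepted
    assert!(!s.store(1, 1)); // exact duplicate rejected
    assert!(s.store(1, 2)); // higher sequence replaces
    assert!(!s.store(1, 2)); // replay of same sequence rejected
    println!("upsert rule ok");
}
```

Note the real store additionally bumps `total_count` only for genuinely new IDs, since an update replaces an entry in place.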

View File

@@ -0,0 +1,290 @@
//! Verification framework for building trust in decentralized services.
//!
//! Verification levels:
//! - 0: None (bare announce)
//! - 1: Self-asserted (profile URL, metadata)
//! - 2: Endorsed by trusted peers
//! - 3: Registry-verified (KBV for therapists, trade registry for craftsmen)
use serde::{Deserialize, Serialize};
use crate::identity::ServiceIdentity;
/// Verification levels (higher = more trusted).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
#[repr(u8)]
pub enum VerificationLevel {
#[default]
None = 0,
SelfAsserted = 1,
PeerEndorsed = 2,
RegistryVerified = 3,
}
impl From<u8> for VerificationLevel {
fn from(value: u8) -> Self {
match value {
1 => VerificationLevel::SelfAsserted,
2 => VerificationLevel::PeerEndorsed,
3.. => VerificationLevel::RegistryVerified,
_ => VerificationLevel::None,
}
}
}
/// A verification attestation attached to a service message.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Verification {
/// Verification level.
pub level: u8,
/// Verifier's mesh address.
pub verifier_address: [u8; 16],
/// What is being verified (e.g., "license", "identity").
pub claim: String,
/// Optional external reference (URL, registry ID).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub reference: Option<String>,
/// Signature over (level || subject_address || claim).
pub signature: Vec<u8>,
/// Timestamp of verification.
pub timestamp: u64,
/// Optional expiry timestamp.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expires: Option<u64>,
}
impl Verification {
/// Create a new peer endorsement.
pub fn peer_endorsement(
verifier: &ServiceIdentity,
subject_address: &[u8; 16],
claim: impl Into<String>,
) -> Self {
Self::new(
verifier,
VerificationLevel::PeerEndorsed,
subject_address,
claim,
None,
)
}
/// Create a registry verification.
pub fn registry(
verifier: &ServiceIdentity,
subject_address: &[u8; 16],
claim: impl Into<String>,
reference: impl Into<String>,
) -> Self {
Self::new(
verifier,
VerificationLevel::RegistryVerified,
subject_address,
claim,
Some(reference.into()),
)
}
/// Create a new verification.
pub fn new(
verifier: &ServiceIdentity,
level: VerificationLevel,
subject_address: &[u8; 16],
claim: impl Into<String>,
reference: Option<String>,
) -> Self {
use std::time::{SystemTime, UNIX_EPOCH};
let claim = claim.into();
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let signable = Self::signable_bytes(level as u8, subject_address, &claim);
let signature = verifier.sign(&signable).to_vec();
Self {
level: level as u8,
verifier_address: verifier.address(),
claim,
reference,
signature,
timestamp,
expires: None,
}
}
/// Set expiry time.
pub fn with_expiry(mut self, expires: u64) -> Self {
self.expires = Some(expires);
self
}
/// Create signable bytes.
fn signable_bytes(level: u8, subject_address: &[u8; 16], claim: &str) -> Vec<u8> {
let mut buf = Vec::with_capacity(17 + claim.len());
buf.push(level);
buf.extend_from_slice(subject_address);
buf.extend_from_slice(claim.as_bytes());
buf
}
/// Verify this attestation.
pub fn verify(&self, verifier_public_key: &[u8; 32], subject_address: &[u8; 16]) -> bool {
use crate::identity::compute_address;
// Verify verifier address matches key
if compute_address(verifier_public_key) != self.verifier_address {
return false;
}
// Check expiry
if let Some(expires) = self.expires {
use std::time::{SystemTime, UNIX_EPOCH};
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
if now > expires {
return false;
}
}
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = Self::signable_bytes(self.level, subject_address, &self.claim);
ServiceIdentity::verify(verifier_public_key, &signable, &sig)
}
}
/// Set of known trusted verifiers (registries, endorsers).
#[derive(Default)]
pub struct TrustedVerifiers {
/// Known public keys with their trust level.
verifiers: Vec<TrustedVerifier>,
}
/// A trusted verifier entry.
#[derive(Clone)]
pub struct TrustedVerifier {
pub public_key: [u8; 32],
pub address: [u8; 16],
pub name: String,
pub max_level: VerificationLevel,
}
impl TrustedVerifiers {
/// Create empty set.
pub fn new() -> Self {
Self::default()
}
/// Add a trusted verifier.
pub fn add(
&mut self,
public_key: [u8; 32],
name: impl Into<String>,
max_level: VerificationLevel,
) {
use crate::identity::compute_address;
self.verifiers.push(TrustedVerifier {
public_key,
address: compute_address(&public_key),
name: name.into(),
max_level,
});
}
/// Find a verifier by address.
pub fn find_by_address(&self, address: &[u8; 16]) -> Option<&TrustedVerifier> {
self.verifiers.iter().find(|v| &v.address == address)
}
/// Verify a verification against known trusted verifiers.
/// Returns the effective level (or 0 if not trusted).
pub fn check(&self, verification: &Verification, subject_address: &[u8; 16]) -> u8 {
let Some(verifier) = self.find_by_address(&verification.verifier_address) else {
return 0;
};
// Level cannot exceed verifier's max
let claimed_level = verification.level.min(verifier.max_level as u8);
// Actually verify the signature
if verification.verify(&verifier.public_key, subject_address) {
claimed_level
} else {
0
}
}
/// Get the highest trusted verification level from a list.
pub fn highest_level(
&self,
verifications: &[Verification],
subject_address: &[u8; 16],
) -> VerificationLevel {
verifications
.iter()
.map(|v| self.check(v, subject_address))
.max()
.map(VerificationLevel::from)
.unwrap_or(VerificationLevel::None)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn peer_endorsement_roundtrip() {
let verifier = ServiceIdentity::generate();
let subject_address = [1u8; 16];
let v = Verification::peer_endorsement(&verifier, &subject_address, "good_actor");
assert!(v.verify(&verifier.public_key(), &subject_address));
assert_eq!(v.level, VerificationLevel::PeerEndorsed as u8);
}
#[test]
fn trusted_verifiers_check() {
let verifier = ServiceIdentity::generate();
let subject_address = [2u8; 16];
let mut trusted = TrustedVerifiers::new();
trusted.add(verifier.public_key(), "Test Registry", VerificationLevel::RegistryVerified);
let v = Verification::registry(&verifier, &subject_address, "licensed", "REG-12345");
let level = trusted.check(&v, &subject_address);
assert_eq!(level, VerificationLevel::RegistryVerified as u8);
}
#[test]
fn untrusted_verifier_returns_zero() {
let verifier = ServiceIdentity::generate();
let subject_address = [3u8; 16];
let trusted = TrustedVerifiers::new(); // Empty
let v = Verification::registry(&verifier, &subject_address, "licensed", "REG-999");
let level = trusted.check(&v, &subject_address);
assert_eq!(level, 0);
}
#[test]
fn expired_verification_fails() {
let verifier = ServiceIdentity::generate();
let subject_address = [4u8; 16];
let v = Verification::peer_endorsement(&verifier, &subject_address, "trusted")
.with_expiry(1); // Expired in 1970
assert!(!v.verify(&verifier.public_key(), &subject_address));
}
}
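Two properties of the level scheme above are easy to miss in review: unknown `u8` values saturate at `RegistryVerified` via the `3..` range pattern, and `TrustedVerifiers::check` caps a claimed level at the verifier's own `max_level`. A minimal standalone sketch (the `Level` enum and `effective_level` helper are hypothetical copies, not crate items):

```rust
/// Hypothetical standalone copy of the verification-level scheme.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
enum Level {
    None = 0,
    SelfAsserted = 1,
    PeerEndorsed = 2,
    RegistryVerified = 3,
}

impl From<u8> for Level {
    fn from(value: u8) -> Self {
        match value {
            1 => Level::SelfAsserted,
            2 => Level::PeerEndorsed,
            3.. => Level::RegistryVerified, // unknown higher values saturate
            _ => Level::None,               // covers 0
        }
    }
}

/// Mirror of the cap in TrustedVerifiers::check: a verifier can never
/// grant a level above its own maximum.
fn effective_level(claimed: u8, verifier_max: Level) -> u8 {
    claimed.min(verifier_max as u8)
}

fn main() {
    assert_eq!(Level::from(200), Level::RegistryVerified); // saturates
    assert_eq!(Level::from(0), Level::None);
    // A peer-level verifier cannot mint registry-level attestations:
    assert_eq!(effective_level(3, Level::PeerEndorsed), 2);
    println!("level rules ok");
}
```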

View File

@@ -0,0 +1,259 @@
//! Wire format for service messages.
//!
//! Binary format for efficient network transmission.
//! Uses CBOR for payload encoding.
use std::io::{Cursor, Read};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
/// Wire message header (fixed 64 bytes).
///
/// ```text
/// ┌─────────────────────────────────────────────────────┐
/// │ 0-3 │ service_id (u32 LE) │
/// │ 4 │ message_type (u8) │
/// │ 5 │ version (u8) │
/// │ 6-7 │ flags (u16 LE, reserved) │
/// │ 8-23 │ message_id (16 bytes) │
/// │ 24-39 │ sender_address (16 bytes) │
/// │ 40-47 │ sequence (u64 LE) │
/// │ 48-49 │ ttl_hours (u16 LE) │
/// │ 50-57 │ timestamp (u64 LE) │
/// │ 58 │ hop_count (u8) │
/// │ 59 │ max_hops (u8) │
/// │ 60-63 │ payload_len (u32 LE) │
/// └─────────────────────────────────────────────────────┘
/// Followed by:
/// │ 64-... │ signature (64 bytes) │
/// │ signature_end-.. │ payload (payload_len bytes) │
/// │ payload_end-.. │ verifications (CBOR, optional) │
/// ```
const HEADER_SIZE: usize = 64;
const SIGNATURE_SIZE: usize = 64;
/// Encode a ServiceMessage to bytes.
pub fn encode(msg: &ServiceMessage) -> Result<Vec<u8>, ServiceError> {
let verifications_bytes = if msg.verifications.is_empty() {
Vec::new()
} else {
let mut buf = Vec::new();
ciborium::into_writer(&msg.verifications, &mut buf)?;
buf
};
let total_size = HEADER_SIZE + SIGNATURE_SIZE + msg.payload.len() + verifications_bytes.len();
let mut buf = Vec::with_capacity(total_size);
// Header
buf.extend_from_slice(&msg.service_id.to_le_bytes()); // 0-3
buf.push(msg.message_type as u8); // 4
buf.push(msg.version); // 5
buf.extend_from_slice(&0u16.to_le_bytes()); // 6-7 flags (reserved)
buf.extend_from_slice(&msg.id); // 8-23
buf.extend_from_slice(&msg.sender_address); // 24-39
buf.extend_from_slice(&msg.sequence.to_le_bytes()); // 40-47
buf.extend_from_slice(&msg.ttl_hours.to_le_bytes()); // 48-49
buf.extend_from_slice(&msg.timestamp.to_le_bytes()); // 50-57
buf.push(msg.hop_count); // 58
buf.push(msg.max_hops); // 59
buf.extend_from_slice(&(msg.payload.len() as u32).to_le_bytes()); // 60-63
// Signature
if msg.signature.len() != SIGNATURE_SIZE {
return Err(ServiceError::InvalidFormat(format!(
"signature must be {} bytes, got {}",
SIGNATURE_SIZE,
msg.signature.len()
)));
}
buf.extend_from_slice(&msg.signature);
// Payload
buf.extend_from_slice(&msg.payload);
// Verifications (optional)
buf.extend_from_slice(&verifications_bytes);
Ok(buf)
}
/// Decode bytes to a ServiceMessage.
pub fn decode(data: &[u8]) -> Result<ServiceMessage, ServiceError> {
if data.len() < HEADER_SIZE + SIGNATURE_SIZE {
return Err(ServiceError::InvalidFormat("message too short".into()));
}
let mut cursor = Cursor::new(data);
let mut buf4 = [0u8; 4];
let mut buf8 = [0u8; 8];
let mut buf16 = [0u8; 16];
let mut buf2 = [0u8; 2];
// Read header
cursor.read_exact(&mut buf4)?;
let service_id = u32::from_le_bytes(buf4);
let mut type_byte = [0u8; 1];
cursor.read_exact(&mut type_byte)?;
let message_type = MessageType::try_from(type_byte[0])
.map_err(|_| ServiceError::InvalidFormat("invalid message type".into()))?;
cursor.read_exact(&mut type_byte)?;
let version = type_byte[0];
cursor.read_exact(&mut buf2)?; // flags (ignored)
cursor.read_exact(&mut buf16)?;
let id = buf16;
cursor.read_exact(&mut buf16)?;
let sender_address = buf16;
cursor.read_exact(&mut buf8)?;
let sequence = u64::from_le_bytes(buf8);
cursor.read_exact(&mut buf2)?;
let ttl_hours = u16::from_le_bytes(buf2);
cursor.read_exact(&mut buf8)?;
let timestamp = u64::from_le_bytes(buf8);
cursor.read_exact(&mut type_byte)?;
let hop_count = type_byte[0];
cursor.read_exact(&mut type_byte)?;
let max_hops = type_byte[0];
cursor.read_exact(&mut buf4)?;
let payload_len = u32::from_le_bytes(buf4) as usize;
// Read signature
let mut signature = vec![0u8; SIGNATURE_SIZE];
cursor.read_exact(&mut signature)?;
// Read payload
if data.len() < HEADER_SIZE + SIGNATURE_SIZE + payload_len {
return Err(ServiceError::InvalidFormat("payload truncated".into()));
}
let mut payload = vec![0u8; payload_len];
cursor.read_exact(&mut payload)?;
// Read verifications (remaining bytes)
let verifications = if cursor.position() < data.len() as u64 {
let mut remaining = Vec::new();
cursor.read_to_end(&mut remaining)?;
if remaining.is_empty() {
Vec::new()
} else {
ciborium::from_reader(&remaining[..])
.map_err(|e| ServiceError::Serialization(e.to_string()))?
}
} else {
Vec::new()
};
Ok(ServiceMessage {
service_id,
message_type,
version,
id,
sender_address,
payload,
signature,
verifications,
sequence,
ttl_hours,
timestamp,
hop_count,
max_hops,
})
}
// Convert std::io::Error so `?` works on the cursor reads in decode().
impl From<std::io::Error> for ServiceError {
fn from(e: std::io::Error) -> Self {
ServiceError::InvalidFormat(e.to_string())
}
}
/// Encode a payload struct to CBOR.
pub fn encode_payload<T: serde::Serialize>(payload: &T) -> Result<Vec<u8>, ServiceError> {
let mut buf = Vec::new();
ciborium::into_writer(payload, &mut buf)?;
Ok(buf)
}
/// Decode a payload from CBOR.
pub fn decode_payload<T: serde::de::DeserializeOwned>(data: &[u8]) -> Result<T, ServiceError> {
ciborium::from_reader(data).map_err(|e| ServiceError::Serialization(e.to_string()))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
use crate::service_ids::FAPP;
use crate::verification::Verification;
#[test]
fn roundtrip_simple() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, FAPP, b"hello world".to_vec(), 42);
let encoded = encode(&msg).unwrap();
let decoded = decode(&encoded).unwrap();
assert_eq!(decoded.service_id, FAPP);
assert_eq!(decoded.message_type, MessageType::Announce);
assert_eq!(decoded.sequence, 42);
assert_eq!(decoded.payload, b"hello world");
assert_eq!(decoded.signature, msg.signature);
}
#[test]
fn roundtrip_with_verifications() {
let id = ServiceIdentity::generate();
let verifier = ServiceIdentity::generate();
let mut msg = ServiceMessage::announce(&id, FAPP, b"payload".to_vec(), 1);
msg.add_verification(Verification::peer_endorsement(
&verifier,
&id.address(),
"trusted",
));
let encoded = encode(&msg).unwrap();
let decoded = decode(&encoded).unwrap();
assert_eq!(decoded.verifications.len(), 1);
assert_eq!(decoded.verifications[0].claim, "trusted");
}
#[test]
fn payload_codec() {
#[derive(serde::Serialize, serde::Deserialize, Debug, PartialEq)]
struct TestPayload {
name: String,
value: i32,
}
let payload = TestPayload {
name: "test".into(),
value: 123,
};
let encoded = encode_payload(&payload).unwrap();
let decoded: TestPayload = decode_payload(&encoded).unwrap();
assert_eq!(payload, decoded);
}
#[test]
fn truncated_rejected() {
let result = decode(&[0u8; 10]);
assert!(matches!(result, Err(ServiceError::InvalidFormat(_))));
}
}
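The field widths in the header diagram must sum to exactly 64 bytes for the documented offsets to hold (4 + 1 + 1 + 2 + 16 + 16 + 8 + 2 + 8 + 1 + 1 + 4 = 64). A throwaway sketch that writes the fields in the documented order and checks the layout; `encode_header` is a hypothetical helper, not the crate's `encode`:

```rust
/// Hypothetical header-only encoder following the documented layout,
/// used to sanity-check the byte offsets.
fn encode_header(
    service_id: u32,
    message_type: u8,
    version: u8,
    id: [u8; 16],
    sender: [u8; 16],
    sequence: u64,
    ttl_hours: u16,
    timestamp: u64,
    hop_count: u8,
    max_hops: u8,
    payload_len: u32,
) -> Vec<u8> {
    let mut buf = Vec::with_capacity(64);
    buf.extend_from_slice(&service_id.to_le_bytes()); // 0-3
    buf.push(message_type); // 4
    buf.push(version); // 5
    buf.extend_from_slice(&0u16.to_le_bytes()); // 6-7 flags (reserved)
    buf.extend_from_slice(&id); // 8-23
    buf.extend_from_slice(&sender); // 24-39
    buf.extend_from_slice(&sequence.to_le_bytes()); // 40-47
    buf.extend_from_slice(&ttl_hours.to_le_bytes()); // 48-49
    buf.extend_from_slice(&timestamp.to_le_bytes()); // 50-57
    buf.push(hop_count); // 58
    buf.push(max_hops); // 59
    buf.extend_from_slice(&payload_len.to_le_bytes()); // 60-63
    buf
}

fn main() {
    let h = encode_header(7, 1, 1, [0; 16], [0; 16], 42, 24, 1_700_000_000, 0, 8, 11);
    assert_eq!(h.len(), 64); // header is exactly HEADER_SIZE bytes
    // payload_len lands at the documented 60..64 offset:
    assert_eq!(u32::from_le_bytes(h[60..64].try_into().unwrap()), 11);
    println!("header layout ok");
}
```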

View File

@@ -119,6 +119,8 @@ pub enum Command {
MeshRoute,
MeshIdentity,
MeshStore,
MeshTrace { address: String },
MeshStats,
// Security / crypto
Verify { username: String },
@@ -187,6 +189,8 @@ impl Command {
Command::MeshRoute => Some(SlashCommand::MeshRoute),
Command::MeshIdentity => Some(SlashCommand::MeshIdentity),
Command::MeshStore => Some(SlashCommand::MeshStore),
Command::MeshTrace { address } => Some(SlashCommand::MeshTrace { address }),
Command::MeshStats => Some(SlashCommand::MeshStats),
Command::Verify { username } => Some(SlashCommand::Verify { username }),
Command::UpdateKey => Some(SlashCommand::UpdateKey),
Command::Typing => Some(SlashCommand::Typing),
@@ -348,6 +352,8 @@ fn slash_to_command(sc: SlashCommand) -> Command {
SlashCommand::MeshRoute => Command::MeshRoute,
SlashCommand::MeshIdentity => Command::MeshIdentity,
SlashCommand::MeshStore => Command::MeshStore,
SlashCommand::MeshTrace { address } => Command::MeshTrace { address },
SlashCommand::MeshStats => Command::MeshStats,
SlashCommand::Verify { username } => Command::Verify { username },
SlashCommand::UpdateKey => Command::UpdateKey,
SlashCommand::Typing => Command::Typing,
@@ -415,6 +421,8 @@ async fn execute_slash(
SlashCommand::MeshRoute => cmd_mesh_route(session),
SlashCommand::MeshIdentity => cmd_mesh_identity(session),
SlashCommand::MeshStore => cmd_mesh_store(session),
SlashCommand::MeshTrace { address } => cmd_mesh_trace(session, &address),
SlashCommand::MeshStats => cmd_mesh_stats(session),
SlashCommand::Verify { username } => cmd_verify(session, client, &username).await,
SlashCommand::UpdateKey => cmd_update_key(session, client).await,
SlashCommand::Typing => cmd_typing(session, client).await,

View File

@@ -434,6 +434,10 @@ impl PlaybookRunner {
"mesh-route" => Ok(Command::MeshRoute),
"mesh-identity" | "mesh-id" => Ok(Command::MeshIdentity),
"mesh-store" => Ok(Command::MeshStore),
"mesh-trace" => Ok(Command::MeshTrace {
address: self.resolve_str(&step.args, "address")?,
}),
"mesh-stats" => Ok(Command::MeshStats),
other => bail!("unknown command: {other}"),
}

View File

@@ -83,6 +83,8 @@ struct App {
channel_names: Vec<String>,
/// Conversation IDs, parallel to `channel_names`.
channel_ids: Vec<ConversationId>,
/// Unread message counts, parallel to `channel_names`.
unread_counts: Vec<u32>,
/// Index of the selected channel in the sidebar.
selected_channel: usize,
/// Messages for the currently active channel.
@@ -102,10 +104,12 @@ impl App {
let convs = session.conv_store.list_conversations()?;
let channel_names: Vec<String> = convs.iter().map(|c| c.display_name.clone()).collect();
let channel_ids: Vec<ConversationId> = convs.iter().map(|c| c.id.clone()).collect();
let unread_counts: Vec<u32> = convs.iter().map(|c| c.unread_count).collect();
Ok(Self {
channel_names,
channel_ids,
unread_counts,
selected_channel: 0,
messages: Vec::new(),
input: String::new(),
@@ -232,14 +236,27 @@ fn draw_sidebar(frame: &mut Frame, app: &App, area: Rect) {
.iter()
.enumerate()
.map(|(i, name)| {
let unread = app.unread_counts.get(i).copied().unwrap_or(0);
let is_selected = i == app.selected_channel;
let label = if unread > 0 && !is_selected {
format!("{name} ({unread})")
} else {
name.clone()
};
let style = if is_selected {
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD | Modifier::REVERSED)
} else if unread > 0 {
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Cyan)
};
ListItem::new(Line::from(Span::styled(label, style)))
})
.collect();
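The sidebar rule in this hunk is small but easy to get backwards: an unread count is appended to the channel name only when the channel is not currently selected. A hypothetical standalone reduction of just the label logic (`channel_label` is not a function in this codebase):

```rust
/// Hypothetical reduction of the sidebar labeling rule above: show the
/// unread count only for channels that are not the selected one.
fn channel_label(name: &str, unread: u32, is_selected: bool) -> String {
    if unread > 0 && !is_selected {
        format!("{name} ({unread})")
    } else {
        name.to_string()
    }
}

fn main() {
    assert_eq!(channel_label("general", 3, false), "general (3)");
    assert_eq!(channel_label("general", 3, true), "general"); // selection hides the count
    assert_eq!(channel_label("dev", 0, false), "dev"); // zero unread stays plain
    println!("label rule ok");
}
```

The selected channel also swaps the yellow "unread" styling for the reversed-cyan selection style, so the count would be redundant there anyway.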

View File

@@ -100,6 +100,8 @@ const COMMANDS: &[CmdDef] = &[
CmdDef { name: "/help", aliases: &["/?"], category: Category::Utility, description: "Show this help message", usage: "/help" },
CmdDef { name: "/quit", aliases: &["/q", "/exit"], category: Category::Utility, description: "Exit the REPL", usage: "/quit" },
CmdDef { name: "/clear", aliases: &[], category: Category::Utility, description: "Clear the terminal", usage: "/clear" },
CmdDef { name: "/search", aliases: &[], category: Category::Messaging, description: "Search messages across all conversations", usage: "/search <query>" },
CmdDef { name: "/delete-conversation", aliases: &["/delconv"], category: Category::Messaging, description: "Delete a conversation and its messages", usage: "/delete-conversation [name]" },
CmdDef { name: "/health", aliases: &[], category: Category::Debug, description: "Check server connection health", usage: "/health" },
CmdDef { name: "/status", aliases: &[], category: Category::Debug, description: "Show connection and auth state", usage: "/status" },
];
@@ -397,6 +399,8 @@ async fn dispatch(
"/switch" | "/sw" => do_switch(client, st, args)?,
"/group" | "/g" => do_group(client, st, args).await?,
"/devices" => do_devices(client, args).await?,
"/search" => do_search(client, args)?,
"/delete-conversation" | "/delconv" => do_delete_conversation(client, st, args)?,
_ => display::print_error(&format!("unknown command: {cmd} (try /help)")),
}
Ok(false)
@@ -983,6 +987,81 @@ async fn do_devices(client: &mut QpqClient, args: &str) -> anyhow::Result<()> {
Ok(())
}
// ── Search ──────────────────────────────────────────────────────────────────
fn do_search(client: &QpqClient, args: &str) -> anyhow::Result<()> {
let query = args.trim();
if query.is_empty() {
display::print_error("usage: /search <query>");
return Ok(());
}
let results = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?.search_messages(query, 25)?;
if results.is_empty() {
display::print_status(&format!("no messages matching \"{query}\""));
return Ok(());
}
println!("\n{BOLD}Search results for \"{query}\"{RESET} ({} matches)\n", results.len());
for r in &results {
let ts = format_timestamp_ms(r.timestamp_ms);
let sender = r.sender_name.as_deref().unwrap_or("?");
println!(
" {DIM}[{ts}]{RESET} {CYAN}{}{RESET} > {GREEN}{sender}{RESET}: {}",
r.conversation_name,
r.body,
);
}
println!();
Ok(())
}
/// Format an epoch-milliseconds timestamp as UTC time-of-day (HH:MM).
fn format_timestamp_ms(ms: u64) -> String {
let secs = ms / 1000;
let hours = (secs % 86400) / 3600;
let minutes = (secs % 3600) / 60;
format!("{hours:02}:{minutes:02}")
}
// ── Delete conversation ─────────────────────────────────────────────────────
fn do_delete_conversation(
client: &QpqClient,
st: &mut ReplState,
args: &str,
) -> anyhow::Result<()> {
let name = args.trim();
// Find by name, or use current conversation.
let target = if name.is_empty() {
st.current_conversation.clone()
} else {
let convs = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?.list_conversations()?;
convs
.iter()
.find(|c| c.display_name.eq_ignore_ascii_case(name))
.map(|c| c.id.clone())
};
let Some(conv_id) = target else {
display::print_error("no matching conversation (specify name or switch first)");
return Ok(());
};
let deleted = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?.delete_conversation(&conv_id)?;
if deleted {
// If we deleted the active conversation, clear it.
if st.current_conversation.as_ref() == Some(&conv_id) {
st.current_conversation = None;
st.current_display_name = None;
}
display::print_status("conversation deleted");
} else {
display::print_error("conversation not found");
}
Ok(())
}
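The name-or-current fallback above can be sketched standalone; the `Conv` struct and `resolve_target` helper here are illustrative stand-ins, not SDK types:

```rust
// Empty input falls back to the active conversation; otherwise match a
// display name case-insensitively, as do_delete_conversation does.
struct Conv {
    id: String,
    display_name: String,
}

fn resolve_target(name: &str, current: Option<&str>, convs: &[Conv]) -> Option<String> {
    if name.is_empty() {
        current.map(str::to_owned)
    } else {
        convs
            .iter()
            .find(|c| c.display_name.eq_ignore_ascii_case(name))
            .map(|c| c.id.clone())
    }
}

fn main() {
    let convs = vec![Conv { id: "c1".into(), display_name: "Team".into() }];
    assert_eq!(resolve_target("team", None, &convs), Some("c1".into()));
    assert_eq!(resolve_target("", Some("c9"), &convs), Some("c9".into()));
    assert_eq!(resolve_target("nope", Some("c9"), &convs), None);
    println!("ok");
}
```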
// ── Entry point ─────────────────────────────────────────────────────────────
/// Run the v2 REPL over a `QpqClient`.


@@ -21,8 +21,7 @@
//!
//! Feature gate: requires both `v2` and `tui` features.
//!
//! Messages are sent via the SDK's MLS encryption pipeline (sealed sender + hybrid wrap).
//! See `quicprochat-sdk::messaging` for the full pipeline.
use std::time::Duration;
@@ -41,8 +40,11 @@ use ratatui::{
};
use tokio::sync::broadcast;
use std::sync::Arc;
use quicprochat_core::IdentityKeypair;
use quicprochat_sdk::client::{ConnectionState, QpqClient};
use quicprochat_sdk::conversation::{ConversationId, ConversationStore, StoredMessage};
use quicprochat_sdk::events::ClientEvent;
// ── Data Types ──────────────────────────────────────────────────────────────
@@ -91,6 +93,8 @@ pub struct TuiApp {
conn_state: quicprochat_sdk::client::ConnectionState,
/// Current MLS epoch for the active conversation (if available).
mls_epoch: Option<u64>,
/// Identity keypair for MLS operations (set after login).
identity: Option<Arc<IdentityKeypair>>,
}
impl TuiApp {
@@ -110,6 +114,7 @@ impl TuiApp {
notification: None,
conn_state: ConnectionState::Disconnected,
mls_epoch: None,
identity: None,
}
}
@@ -573,14 +578,83 @@ async fn handle_input(app: &mut TuiApp, client: &mut QpqClient, text: &str) {
// Snap to bottom.
app.scroll_offset = 0;
// Send via MLS encryption pipeline.
let conv_id_bytes = *app.active_conv_id().unwrap();
let conv_id = ConversationId(conv_id_bytes);
let send_result = send_tui_message(client, app, &conv_id, text).await;
match send_result {
Ok(()) => {
app.notification = Some("Sent".to_string());
}
Err(e) => {
app.notification = Some(format!("Send failed: {e}"));
}
}
}
}
/// Send a message via the SDK's MLS encryption pipeline.
async fn send_tui_message(
client: &QpqClient,
app: &TuiApp,
conv_id: &ConversationId,
text: &str,
) -> anyhow::Result<()> {
let identity = app
.identity
.as_ref()
.ok_or_else(|| anyhow::anyhow!("not logged in — identity not loaded"))?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv = conv_store
.load_conversation(conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, identity)?;
let my_pub = identity.public_key_bytes();
let recipients: Vec<Vec<u8>> = conv
.member_keys
.iter()
.filter(|k| k.as_slice() != my_pub.as_slice())
.cloned()
.collect();
if recipients.is_empty() {
return Err(anyhow::anyhow!("no recipients in conversation"));
}
let hybrid_keys = vec![None; recipients.len()];
quicprochat_sdk::messaging::send_message(
rpc,
&mut member,
identity,
text,
&recipients,
&hybrid_keys,
conv_id.0.as_slice(),
)
.await?;
quicprochat_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
let now = quicprochat_sdk::conversation::now_ms();
conv_store.save_message(&StoredMessage {
conversation_id: conv_id.clone(),
message_id: None,
sender_key: my_pub.to_vec(),
sender_name: client.username().map(|s| s.to_string()),
body: text.to_string(),
msg_type: "chat".to_string(),
ref_msg_id: None,
timestamp_ms: now,
is_outgoing: true,
})?;
Ok(())
}
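The recipient-selection step (every member key except our own) in isolation; `recipients_excluding_self` is a hypothetical helper for illustration, not an SDK function:

```rust
// Every member key except our own identity key becomes a sealed-sender
// recipient, mirroring the filter in send_tui_message.
fn recipients_excluding_self(member_keys: &[Vec<u8>], my_pub: &[u8]) -> Vec<Vec<u8>> {
    member_keys
        .iter()
        .filter(|k| k.as_slice() != my_pub)
        .cloned()
        .collect()
}

fn main() {
    let me = vec![1u8, 1];
    let peer = vec![2u8, 2];
    let members = vec![me.clone(), peer.clone()];
    assert_eq!(recipients_excluding_self(&members, &me), vec![peer]);
    // A conversation containing only ourselves yields no recipients; the TUI
    // treats that as an error before calling send_message.
    assert!(recipients_excluding_self(&[me.clone()], &me).is_empty());
    println!("ok");
}
```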
/// Handle a /command.
async fn handle_command(app: &mut TuiApp, client: &mut QpqClient, cmd: &str) {
let parts: Vec<&str> = cmd.splitn(3, ' ').collect();


@@ -351,6 +351,25 @@ async fn connect_client(args: &Args) -> anyhow::Result<QpqClient> {
Ok(client)
}
/// Connect and return client + identity keypair (needed for MLS one-shot commands).
async fn connect_with_identity(
args: &Args,
) -> anyhow::Result<(QpqClient, std::sync::Arc<quicprochat_core::IdentityKeypair>)> {
let client = connect_client(args).await?;
let keypair = if args.state.exists() {
let stored =
quicprochat_sdk::state::load_state(&args.state, args.db_password.as_deref())
.context("load identity state — register or login first")?;
std::sync::Arc::new(quicprochat_core::IdentityKeypair::from_seed(
stored.identity_seed,
))
} else {
anyhow::bail!("no state file found at {} — register or login first", args.state.display());
};
Ok((client, keypair))
}
// ── Entry point ──────────────────────────────────────────────────────────────
pub fn main() {
@@ -446,34 +465,89 @@ async fn run(args: Args) -> anyhow::Result<()> {
}
Cmd::Dm { ref username } => {
let (client, identity) = connect_with_identity(&args).await?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicprochat_sdk::users::resolve_user(rpc, username)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{username}' not found"))?;
let key_package = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("no KeyPackage available for peer"))?;
let mut member = quicprochat_core::GroupMember::new(identity.clone());
let (conv_id, was_new) = quicprochat_sdk::groups::create_dm(
rpc, conv_store, &mut member, &identity,
&peer_key, &key_package, None, None,
).await?;
if was_new {
println!("DM with {username} created (id: {})", hex::encode(conv_id.0));
} else {
println!("DM with {username} resumed (id: {})", hex::encode(conv_id.0));
}
}
Cmd::Send { ref to, ref msg } => {
let (client, identity) = connect_with_identity(&args).await?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(to);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation '{to}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let my_pub = identity.public_key_bytes();
let recipients: Vec<Vec<u8>> = conv
.member_keys
.iter()
.filter(|k| k.as_slice() != my_pub.as_slice())
.cloned()
.collect();
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let hybrid_keys = vec![None; recipients.len()];
quicprochat_sdk::messaging::send_message(
rpc, &mut member, &identity, msg, &recipients, &hybrid_keys, conv_id.0.as_slice(),
).await?;
quicprochat_sdk::groups::save_mls_state(conv_store, &conv_id, &member)?;
println!("sent to {to}");
}
Cmd::Recv { ref from } => {
let (client, identity) = connect_with_identity(&args).await?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(from);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation '{from}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let my_key = identity.public_key_bytes();
let messages = quicprochat_sdk::messaging::receive_messages(
rpc, &mut member, my_key.as_slice(), None, conv_id.0.as_slice(), &[],
).await?;
quicprochat_sdk::groups::save_mls_state(conv_store, &conv_id, &member)?;
if messages.is_empty() {
println!("no new messages");
} else {
for msg in &messages {
let sender_short = hex::encode(&msg.sender_key[..4]);
let body = match &msg.message {
quicprochat_core::AppMessage::Chat { body, .. } => {
String::from_utf8_lossy(body).to_string()
}
other => format!("{other:?}"),
};
println!("[{sender_short}] {body}");
}
}
}
Cmd::Group {
action: GroupCmd::Create { ref name },
} => {
let (_client, identity) = connect_with_identity(&args).await?;
let conv_store = _client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let mut member = quicprochat_core::GroupMember::new(identity.clone());
let conv_id = quicprochat_sdk::groups::create_group(conv_store, &mut member, name)?;
println!("group '{name}' created (id: {})", hex::encode(conv_id.0));
}
Cmd::Group {
@@ -483,9 +557,26 @@ async fn run(args: Args) -> anyhow::Result<()> {
ref user,
},
} => {
let (client, identity) = connect_with_identity(&args).await?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(group);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("group '{group}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
// Resolve peer identity key and fetch their KeyPackage.
let peer_key = quicprochat_sdk::users::resolve_user(rpc, user)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{user}' not found"))?;
let key_package = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("no KeyPackage available for peer"))?;
quicprochat_sdk::groups::invite_to_group(
rpc, conv_store, &mut member, &identity, &conv_id,
&peer_key, &key_package, None, None,
).await?;
println!("invited {user} to '{group}'");
}
Cmd::Devices {


@@ -43,6 +43,7 @@ humantime-serde = "1"
[dev-dependencies]
tempfile = "3"
meshservice = { path = "../meshservice" }
[[example]]
name = "fapp_demo"


@@ -35,6 +35,21 @@ pub const FAPP_WIRE_RESERVE: u8 = 0x04;
/// [`SlotConfirm`](crate::fapp::SlotConfirm) frame (handled later).
pub const FAPP_WIRE_CONFIRM: u8 = 0x05;
/// Check whether a raw payload starts with a known FAPP wire tag.
///
/// Useful for the mesh router to decide whether a delivered envelope should be
/// routed through the [`FappRouter`] rather than the application layer.
pub fn is_fapp_payload(payload: &[u8]) -> bool {
matches!(
payload.first(),
Some(&FAPP_WIRE_ANNOUNCE)
| Some(&FAPP_WIRE_QUERY)
| Some(&FAPP_WIRE_RESPONSE)
| Some(&FAPP_WIRE_RESERVE)
| Some(&FAPP_WIRE_CONFIRM)
)
}
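A self-contained copy of the tag dispatch, assuming the contiguous 0x01–0x05 tag values implied by the tests (only 0x04 and 0x05 are visible in this hunk):

```rust
// First payload byte selects FAPP routing; anything else goes to the
// application layer. Tag values 0x01-0x03 are inferred, not confirmed.
const FAPP_WIRE_ANNOUNCE: u8 = 0x01;
const FAPP_WIRE_QUERY: u8 = 0x02;
const FAPP_WIRE_RESPONSE: u8 = 0x03;
const FAPP_WIRE_RESERVE: u8 = 0x04;
const FAPP_WIRE_CONFIRM: u8 = 0x05;

fn is_fapp_payload(payload: &[u8]) -> bool {
    matches!(
        payload.first(),
        Some(&FAPP_WIRE_ANNOUNCE)
            | Some(&FAPP_WIRE_QUERY)
            | Some(&FAPP_WIRE_RESPONSE)
            | Some(&FAPP_WIRE_RESERVE)
            | Some(&FAPP_WIRE_CONFIRM)
    )
}

fn main() {
    assert!(is_fapp_payload(&[FAPP_WIRE_QUERY, 0xAA]));
    assert!(!is_fapp_payload(&[]));
    assert!(!is_fapp_payload(&[0x10])); // outside the FAPP tag range
    println!("ok");
}
```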
// ---------------------------------------------------------------------------
// FappAction — what to do after handling an incoming FAPP frame
// ---------------------------------------------------------------------------
@@ -455,6 +470,24 @@ mod tests {
use crate::fapp::{Fachrichtung, Kostentraeger, Modalitaet, SlotType, TimeSlot};
use crate::identity::MeshIdentity;
#[test]
fn is_fapp_payload_recognizes_all_tags() {
assert!(is_fapp_payload(&[FAPP_WIRE_ANNOUNCE, 0x01]));
assert!(is_fapp_payload(&[FAPP_WIRE_QUERY, 0x01]));
assert!(is_fapp_payload(&[FAPP_WIRE_RESPONSE, 0x01]));
assert!(is_fapp_payload(&[FAPP_WIRE_RESERVE, 0x01]));
assert!(is_fapp_payload(&[FAPP_WIRE_CONFIRM, 0x01]));
}
#[test]
fn is_fapp_payload_rejects_non_fapp() {
assert!(!is_fapp_payload(&[]));
assert!(!is_fapp_payload(&[0x00]));
assert!(!is_fapp_payload(&[0x06]));
assert!(!is_fapp_payload(&[0x10])); // KeyPackageRequest tag
assert!(!is_fapp_payload(&[0xFF]));
}
#[test]
fn handle_incoming_unknown_tag_dropped() {
let routes = Arc::new(RwLock::new(RoutingTable::new(Duration::from_secs(300))));


@@ -32,6 +32,7 @@ pub mod rate_limit;
pub mod shutdown;
pub mod identity;
pub mod link;
pub mod mesh_node;
pub mod mesh_router;
pub mod routing;
pub mod routing_table;
@@ -41,6 +42,8 @@ pub mod transport_iroh;
pub mod transport_manager;
pub mod transport_tcp;
pub mod transport_lora;
pub mod observability;
pub mod viz_log;
#[cfg(feature = "traffic-resistance")]
pub mod traffic_resistance;


@@ -0,0 +1,831 @@
//! Production-ready mesh node integrating all subsystems.
//!
//! [`MeshNode`] combines:
//! - P2P transport (iroh QUIC)
//! - Mesh routing and store-and-forward
//! - FAPP (appointment discovery)
//! - Rate limiting and backpressure
//! - Metrics collection
//! - Graceful shutdown
//!
//! This is the main entry point for production deployments.
use std::net::SocketAddr;
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, RwLock};
use std::time::Duration;
use iroh::{Endpoint, EndpointAddr, PublicKey, SecretKey};
use tokio::sync::{mpsc, watch};
use crate::address::MeshAddress;
use crate::announce_protocol::{self, AnnounceConfig as AnnounceProtoConfig, AnnounceDedup};
use crate::broadcast::BroadcastManager;
use crate::config::MeshConfig;
use crate::envelope::MeshEnvelope;
use crate::error::{MeshError, MeshResult};
use crate::fapp::{FappStore, CAP_FAPP_PATIENT, CAP_FAPP_RELAY, CAP_FAPP_THERAPIST};
use crate::fapp_router::{is_fapp_payload, FappRouter};
use crate::identity::MeshIdentity;
use crate::mesh_router::{IncomingAction, MeshRouter};
use crate::metrics::{self, MeshMetrics};
use crate::observability::{HealthServer, NodeHealth};
use crate::rate_limit::{BackpressureController, RateLimiter};
use crate::routing_table::RoutingTable;
use crate::shutdown::{ShutdownCoordinator, ShutdownSignal, ShutdownTrigger};
use crate::store::MeshStore;
use crate::transport::TransportAddr;
use crate::transport_manager::TransportManager;
/// ALPN for mesh protocol.
const MESH_ALPN: &[u8] = b"quicprochat/mesh/1";
/// Production mesh node with all subsystems integrated.
pub struct MeshNode {
/// Node configuration.
config: MeshConfig,
/// iroh endpoint for QUIC transport.
endpoint: Endpoint,
/// Mesh identity (Ed25519 keypair).
identity: MeshIdentity,
/// Mesh address (truncated from identity).
address: MeshAddress,
/// Routing table for mesh forwarding.
routing_table: Arc<RwLock<RoutingTable>>,
/// Store-and-forward message queue.
mesh_store: Arc<std::sync::Mutex<MeshStore>>,
/// Broadcast channel manager.
broadcast_mgr: Arc<std::sync::Mutex<BroadcastManager>>,
/// Multi-transport manager.
transport_manager: Arc<TransportManager>,
/// Mesh router for envelope handling.
mesh_router: Arc<MeshRouter>,
/// FAPP router (optional, based on capabilities).
fapp_router: Option<Arc<FappRouter>>,
/// Rate limiter for DoS protection.
rate_limiter: Arc<RateLimiter>,
/// Backpressure controller.
backpressure: Arc<BackpressureController>,
/// Metrics collector.
metrics: Arc<MeshMetrics>,
/// Shutdown coordinator.
shutdown: Arc<ShutdownCoordinator>,
/// Shutdown trigger (clone for external use).
shutdown_trigger: ShutdownTrigger,
/// Whether the node is draining (shutting down).
draining: Arc<AtomicBool>,
/// Health/metrics HTTP listen address (if configured).
health_listen: Option<SocketAddr>,
}
/// Builder for MeshNode with sensible defaults.
pub struct MeshNodeBuilder {
config: MeshConfig,
identity: Option<MeshIdentity>,
secret_key: Option<SecretKey>,
fapp_capabilities: u16,
health_listen: Option<SocketAddr>,
}
impl MeshNodeBuilder {
pub fn new() -> Self {
Self {
config: MeshConfig::default(),
identity: None,
secret_key: None,
fapp_capabilities: 0,
health_listen: None,
}
}
/// Use a specific configuration.
pub fn config(mut self, config: MeshConfig) -> Self {
self.config = config;
self
}
/// Use existing mesh identity.
pub fn identity(mut self, identity: MeshIdentity) -> Self {
self.identity = Some(identity);
self
}
/// Use existing iroh secret key.
pub fn secret_key(mut self, key: SecretKey) -> Self {
self.secret_key = Some(key);
self
}
/// Enable FAPP therapist capabilities.
pub fn fapp_therapist(mut self) -> Self {
self.fapp_capabilities |= CAP_FAPP_THERAPIST;
self
}
/// Enable FAPP relay capabilities.
pub fn fapp_relay(mut self) -> Self {
self.fapp_capabilities |= CAP_FAPP_RELAY;
self
}
/// Enable FAPP patient capabilities.
pub fn fapp_patient(mut self) -> Self {
self.fapp_capabilities |= CAP_FAPP_PATIENT;
self
}
/// Enable health/metrics HTTP endpoint on the given address.
pub fn health_listen(mut self, addr: SocketAddr) -> Self {
self.health_listen = Some(addr);
self
}
/// Build and start the mesh node.
pub async fn build(self) -> MeshResult<MeshNode> {
MeshNode::start(
self.config,
self.identity,
self.secret_key,
self.fapp_capabilities,
self.health_listen,
)
.await
}
}
impl Default for MeshNodeBuilder {
fn default() -> Self {
Self::new()
}
}
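The builder accumulates FAPP roles as a u16 bitmask, and a non-zero mask is what makes `MeshNode::start` construct a `FappRouter`. A minimal sketch of that pattern (the `CAP_*` values here are illustrative; the real constants live in `crate::fapp`):

```rust
// Illustrative capability bits — not the real crate::fapp values.
const CAP_FAPP_THERAPIST: u16 = 1 << 0;
const CAP_FAPP_RELAY: u16 = 1 << 1;
const CAP_FAPP_PATIENT: u16 = 1 << 2;

#[derive(Default)]
struct Builder {
    fapp_capabilities: u16,
}

impl Builder {
    // Each role method ORs one bit in, so roles compose freely.
    fn fapp_relay(mut self) -> Self {
        self.fapp_capabilities |= CAP_FAPP_RELAY;
        self
    }
    fn fapp_patient(mut self) -> Self {
        self.fapp_capabilities |= CAP_FAPP_PATIENT;
        self
    }
}

fn main() {
    let b = Builder::default().fapp_relay().fapp_patient();
    assert_ne!(b.fapp_capabilities, 0); // would enable the FappRouter
    assert_eq!(b.fapp_capabilities & CAP_FAPP_THERAPIST, 0);
    println!("ok");
}
```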
impl MeshNode {
/// Start a new mesh node with full configuration.
pub async fn start(
config: MeshConfig,
identity: Option<MeshIdentity>,
secret_key: Option<SecretKey>,
fapp_capabilities: u16,
health_listen: Option<SocketAddr>,
) -> MeshResult<Self> {
// Initialize metrics
let metrics = Arc::new(MeshMetrics::new());
// Create identity
let identity = identity.unwrap_or_else(MeshIdentity::generate);
let address = MeshAddress::from_public_key(&identity.public_key());
// Build iroh endpoint
let mut builder = Endpoint::builder();
if let Some(sk) = secret_key {
builder = builder.secret_key(sk);
}
builder = builder.alpns(vec![MESH_ALPN.to_vec()]);
let endpoint = builder.bind().await.map_err(|e| {
MeshError::Internal(format!("failed to bind endpoint: {}", e))
})?;
tracing::info!(
node_id = %endpoint.id().fmt_short(),
mesh_addr = %address,
"Mesh node starting"
);
// Create routing table
let routing_table = Arc::new(RwLock::new(RoutingTable::new(
config.routing.default_ttl,
)));
// Create stores
let mesh_store = Arc::new(std::sync::Mutex::new(MeshStore::new(
config.store.max_messages,
)));
let broadcast_mgr = Arc::new(std::sync::Mutex::new(BroadcastManager::new()));
// Create transport manager
let transport_manager = Arc::new(TransportManager::new());
// Create mesh router (needs its own identity copy)
let router_identity = MeshIdentity::from_seed(identity.seed_bytes());
let mesh_router = Arc::new(MeshRouter::new(
router_identity,
Arc::clone(&routing_table),
Arc::clone(&transport_manager),
Arc::clone(&mesh_store),
));
// Create FAPP router if capabilities are set
let fapp_router = if fapp_capabilities != 0 {
Some(Arc::new(FappRouter::new(
FappStore::new(),
Arc::clone(&routing_table),
Arc::clone(&transport_manager),
fapp_capabilities,
)))
} else {
None
};
// Create rate limiter
let rate_limiter = Arc::new(RateLimiter::new(config.rate_limit.clone()));
// Create backpressure controller
let backpressure = Arc::new(BackpressureController::default_for_standard());
// Create shutdown coordinator
let shutdown = Arc::new(ShutdownCoordinator::new());
let (shutdown_trigger, _shutdown_signal) = ShutdownSignal::new();
let draining = Arc::new(AtomicBool::new(false));
let node = Self {
config,
endpoint,
identity,
address,
routing_table,
mesh_store,
broadcast_mgr,
transport_manager,
mesh_router,
fapp_router,
rate_limiter,
backpressure,
metrics,
shutdown,
shutdown_trigger,
draining,
health_listen,
};
tracing::info!(
mesh_addr = %node.address,
fapp = fapp_capabilities != 0,
health = ?node.health_listen,
"Mesh node started"
);
Ok(node)
}
/// Get the node's mesh address.
pub fn address(&self) -> MeshAddress {
self.address
}
/// Get the node's iroh public key.
pub fn node_id(&self) -> PublicKey {
self.endpoint.id()
}
/// Get the node's endpoint address for sharing.
pub fn endpoint_addr(&self) -> EndpointAddr {
self.endpoint.addr()
}
/// Get a reference to the mesh identity.
pub fn identity(&self) -> &MeshIdentity {
&self.identity
}
/// Get a reference to the configuration.
pub fn config(&self) -> &MeshConfig {
&self.config
}
/// Get a reference to the metrics.
pub fn metrics(&self) -> &Arc<MeshMetrics> {
&self.metrics
}
/// Get a reference to the mesh router.
pub fn mesh_router(&self) -> &Arc<MeshRouter> {
&self.mesh_router
}
/// Get a reference to the FAPP router, if enabled.
pub fn fapp_router(&self) -> Option<&Arc<FappRouter>> {
self.fapp_router.as_ref()
}
/// Get a reference to the routing table.
pub fn routing_table(&self) -> &Arc<RwLock<RoutingTable>> {
&self.routing_table
}
/// Get a reference to the transport manager.
pub fn transport_manager(&self) -> &Arc<TransportManager> {
&self.transport_manager
}
/// Get a clone of the shutdown trigger.
pub fn shutdown_trigger(&self) -> ShutdownTrigger {
self.shutdown_trigger.clone()
}
/// Whether the node is currently draining (shutting down).
pub fn is_draining(&self) -> bool {
self.draining.load(std::sync::atomic::Ordering::Relaxed)
}
/// Get a snapshot of the current node health.
pub fn health(&self) -> NodeHealth {
let snapshot = self.metrics.snapshot();
NodeHealth::from_snapshot(&snapshot, self.is_draining())
}
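The health snapshot hinges on the shared `draining` flag, which `shutdown()` flips before anything else. A minimal sketch of that readiness flip (the status strings are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Readiness derived from the shared draining flag — the same mechanism
// that lets the /healthz endpoint return 503 during graceful drain.
fn health_status(draining: &AtomicBool) -> &'static str {
    if draining.load(Ordering::Relaxed) {
        "draining"
    } else {
        "ok"
    }
}

fn main() {
    let draining = Arc::new(AtomicBool::new(false));
    assert_eq!(health_status(&draining), "ok");
    draining.store(true, Ordering::Relaxed); // what MeshNode::shutdown() does first
    assert_eq!(health_status(&draining), "draining");
    println!("ok");
}
```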
/// Send a mesh envelope to a peer.
#[tracing::instrument(skip(self, envelope), fields(dest = %dest, payload_len = envelope.payload.len()))]
pub async fn send(&self, dest: &TransportAddr, envelope: &MeshEnvelope) -> MeshResult<()> {
let wire = envelope.to_wire();
self.metrics.transport("mesh").sent.inc();
self.metrics.transport("mesh").bytes_sent.inc_by(wire.len() as u64);
self.transport_manager
.send(dest, &wire)
.await
.map_err(|e| MeshError::Internal(e.to_string()))
}
/// Process an incoming envelope with rate limiting and metrics.
#[tracing::instrument(skip(self, envelope), fields(sender = %sender, payload_len = envelope.payload.len()))]
pub fn process_incoming(&self, sender: &MeshAddress, envelope: MeshEnvelope) -> MeshResult<IncomingAction> {
// Rate limiting check
let rate_result = self.rate_limiter.check_message(sender)?;
if !rate_result.is_allowed() {
// Rate-limited drops are currently tallied on the protocol.oversized counter.
self.metrics.protocol.oversized.inc();
return Ok(IncomingAction::Dropped("rate limited".into()));
}
// Backpressure check
let _bp_level = self.backpressure.level();
// For now, we process all messages regardless of backpressure
// In production, we'd check message priority
// Update metrics
self.metrics.transport("mesh").received.inc();
self.metrics.transport("mesh").bytes_received.inc_by(envelope.payload.len() as u64);
// Delegate to mesh router
let action = self.mesh_router.handle_incoming(envelope)
.map_err(|e| MeshError::Internal(e.to_string()))?;
// If the envelope is delivered locally and its payload is a FAPP frame,
// delegate to the FappRouter instead of returning a raw Deliver.
let action = match action {
IncomingAction::Deliver(ref env) if self.fapp_router.is_some() && is_fapp_payload(&env.payload) => {
let fapp_router = self.fapp_router.as_ref().unwrap();
let fapp_action = fapp_router.handle_incoming(&env.payload);
IncomingAction::Fapp(fapp_action)
}
other => other,
};
// Update routing metrics based on action
match &action {
IncomingAction::Deliver(_) => {
self.metrics.store.messages_delivered.inc();
}
IncomingAction::Forward {
envelope: _,
next_hop,
} => {
self.metrics.routing.announcements_forwarded.inc();
let from = format!("{sender}");
let to = next_hop.to_string();
crate::viz_log::log_forward_hop(&from, &to, 0);
}
IncomingAction::Store(_) => {
self.metrics.store.messages_stored.inc();
}
IncomingAction::Dropped(_) => {
self.metrics.protocol.parse_errors.inc();
}
IncomingAction::Fapp(_) => {
self.metrics.store.messages_delivered.inc();
}
}
Ok(action)
}
/// Parse and process raw incoming bytes.
pub fn process_incoming_bytes(&self, sender: &MeshAddress, data: &[u8]) -> MeshResult<IncomingAction> {
let envelope = MeshEnvelope::from_wire(data)
.map_err(|e| MeshError::Protocol(crate::error::ProtocolError::InvalidFormat(e.to_string())))?;
self.process_incoming(sender, envelope)
}
/// Store a message for offline delivery.
pub fn store_for_delivery(&self, envelope: MeshEnvelope) -> MeshResult<bool> {
let mut store = self.mesh_store.lock().map_err(|e| {
MeshError::Internal(format!("mesh store lock poisoned: {}", e))
})?;
let stored = store.store(envelope);
if stored {
self.metrics.store.messages_stored.inc();
self.metrics.store.current_size.set(store.stats().0 as u64);
}
Ok(stored)
}
/// Fetch stored messages for a recipient.
pub fn fetch_stored(&self, recipient: &[u8]) -> MeshResult<Vec<MeshEnvelope>> {
let mut store = self.mesh_store.lock().map_err(|e| {
MeshError::Internal(format!("mesh store lock poisoned: {}", e))
})?;
let messages = store.fetch(recipient);
self.metrics.store.current_size.set(store.stats().0 as u64);
Ok(messages)
}
/// Run garbage collection on stores.
pub fn gc(&self) -> MeshResult<GcStats> {
let mut stats = GcStats::default();
// GC mesh store
{
let mut store = self.mesh_store.lock().map_err(|e| {
MeshError::Internal(format!("mesh store lock: {}", e))
})?;
stats.messages_expired = store.gc_expired();
self.metrics.store.messages_expired.inc_by(stats.messages_expired as u64);
}
// GC routing table
{
let mut table = self.routing_table.write().map_err(|e| {
MeshError::Internal(format!("routing table lock: {}", e))
})?;
stats.routes_expired = table.remove_expired();
self.metrics.routing.routes_expired.inc_by(stats.routes_expired as u64);
}
// GC rate limiter (remove idle peers)
stats.rate_limiters_cleaned = self.rate_limiter.cleanup(Duration::from_secs(3600));
tracing::debug!(
messages = stats.messages_expired,
routes = stats.routes_expired,
rate_limiters = stats.rate_limiters_cleaned,
"GC completed"
);
Ok(stats)
}
/// Run the mesh node event loop with background tasks.
///
/// Starts:
/// - Periodic garbage collection (routing table, store, rate limiters)
/// - Health/metrics HTTP server (if `health_listen` is configured)
///
/// Returns a [`RunHandle`] that can be used to await shutdown or trigger it.
pub async fn run(self) -> MeshResult<RunHandle> {
let (shutdown_tx, shutdown_rx) = watch::channel(false);
// Start health server if configured.
let health_addr = if let Some(addr) = self.health_listen {
let server = HealthServer::new(
Arc::clone(&self.metrics),
Arc::clone(&self.draining),
);
match server.serve(addr, shutdown_rx.clone()).await {
Ok(bound) => Some(bound),
Err(e) => {
tracing::warn!(error = %e, "failed to start health server");
None
}
}
} else {
None
};
// Spawn GC task.
let gc_metrics = Arc::clone(&self.metrics);
let gc_store = Arc::clone(&self.mesh_store);
let gc_routing = Arc::clone(&self.routing_table);
let gc_rate_limiter = Arc::clone(&self.rate_limiter);
let gc_interval = self.config.routing.gc_interval;
let mut gc_shutdown = shutdown_rx.clone();
tokio::spawn(async move {
let mut interval = tokio::time::interval(gc_interval);
interval.tick().await; // skip immediate first tick
loop {
tokio::select! {
biased;
_ = gc_shutdown.changed() => break,
_ = interval.tick() => {
let _span = tracing::info_span!("mesh_gc").entered();
let mut expired_messages = 0usize;
let mut expired_routes = 0usize;
let mut cleaned_limiters = 0usize;
// GC store.
if let Ok(mut store) = gc_store.lock() {
expired_messages = store.gc_expired();
gc_metrics.store.messages_expired.inc_by(expired_messages as u64);
}
// GC routing table.
if let Ok(mut table) = gc_routing.write() {
expired_routes = table.remove_expired();
gc_metrics.routing.routes_expired.inc_by(expired_routes as u64);
}
// GC rate limiters.
cleaned_limiters = gc_rate_limiter.cleanup(Duration::from_secs(3600));
if expired_messages > 0 || expired_routes > 0 || cleaned_limiters > 0 {
tracing::debug!(
messages = expired_messages,
routes = expired_routes,
rate_limiters = cleaned_limiters,
"GC cycle completed"
);
}
}
}
}
});
tracing::info!(
mesh_addr = %self.address,
health = ?health_addr,
"Mesh node running"
);
Ok(RunHandle {
node: self,
shutdown_tx,
health_addr,
})
}
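The GC task uses a biased `select!` so the shutdown signal always wins over the tick. The same shape can be sketched without tokio, using `recv_timeout` as the tick (a simplification for illustration, not the production loop):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a GC "tick" loop until a shutdown signal arrives; returns cycle count.
/// recv_timeout doubles as the tick: a message (or hangup) means shutdown,
/// a timeout means "run one GC cycle".
fn run_gc_until_shutdown(tick: Duration, run_for: Duration) -> u32 {
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>();
    let gc = thread::spawn(move || {
        let mut cycles = 0u32;
        loop {
            match shutdown_rx.recv_timeout(tick) {
                Ok(()) | Err(mpsc::RecvTimeoutError::Disconnected) => break,
                Err(mpsc::RecvTimeoutError::Timeout) => cycles += 1,
            }
        }
        cycles
    });
    thread::sleep(run_for);
    let _ = shutdown_tx.send(());
    gc.join().unwrap()
}

fn main() {
    let cycles = run_gc_until_shutdown(Duration::from_millis(5), Duration::from_millis(40));
    assert!(cycles >= 1);
    println!("ran {cycles} gc cycles before shutdown");
}
```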
/// Gracefully shut down the node.
pub async fn shutdown(self) {
tracing::info!("Mesh node shutting down");
// Mark as draining for health checks.
self.draining.store(true, std::sync::atomic::Ordering::Relaxed);
// Trigger shutdown
self.shutdown_trigger.trigger();
// Run shutdown coordinator
self.shutdown.shutdown().await;
// Close transports
let _ = self.transport_manager.close_all().await;
// Close iroh endpoint
self.endpoint.close().await;
tracing::info!("Mesh node shutdown complete");
}
}
/// Statistics from garbage collection.
#[derive(Debug, Default)]
pub struct GcStats {
pub messages_expired: usize,
pub routes_expired: usize,
pub rate_limiters_cleaned: usize,
}
/// Handle for a running mesh node.
///
/// Provides access to the node and controls for shutdown.
pub struct RunHandle {
/// The running mesh node.
node: MeshNode,
/// Shutdown sender — drop or send to stop background tasks.
shutdown_tx: watch::Sender<bool>,
/// Bound health server address (if started).
health_addr: Option<SocketAddr>,
}
impl RunHandle {
/// Get a reference to the running mesh node.
pub fn node(&self) -> &MeshNode {
&self.node
}
/// Get the health server's bound address, if running.
pub fn health_addr(&self) -> Option<SocketAddr> {
self.health_addr
}
/// Trigger graceful shutdown and wait for completion.
pub async fn shutdown(self) {
// Signal background tasks to stop.
let _ = self.shutdown_tx.send(true);
// Run node shutdown (drains transports, etc.).
self.node.shutdown().await;
}
/// Get a snapshot of current node health.
pub fn health(&self) -> NodeHealth {
self.node.health()
}
/// Get a snapshot of current metrics.
pub fn metrics_snapshot(&self) -> crate::metrics::MetricsSnapshot {
self.node.metrics().snapshot()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::envelope::MeshEnvelope;
use crate::fapp_router::{FappAction, FAPP_WIRE_QUERY, FAPP_WIRE_ANNOUNCE};
#[tokio::test]
async fn mesh_node_starts() {
let node = MeshNodeBuilder::new()
.build()
.await
.expect("build node");
assert!(!node.address().is_broadcast());
assert!(node.fapp_router().is_none());
node.shutdown().await;
}
#[tokio::test]
async fn mesh_node_with_fapp() {
let node = MeshNodeBuilder::new()
.fapp_relay()
.fapp_patient()
.build()
.await
.expect("build node");
assert!(node.fapp_router().is_some());
node.shutdown().await;
}
#[tokio::test]
async fn mesh_node_metrics() {
let node = MeshNodeBuilder::new()
.build()
.await
.expect("build node");
// Check metrics are accessible
let snapshot = node.metrics().snapshot();
assert!(snapshot.uptime_secs < 5);
node.shutdown().await;
}
#[tokio::test]
async fn mesh_node_gc() {
let node = MeshNodeBuilder::new()
.build()
.await
.expect("build node");
let stats = node.gc().expect("gc");
assert_eq!(stats.messages_expired, 0);
assert_eq!(stats.routes_expired, 0);
node.shutdown().await;
}
#[tokio::test]
async fn mesh_node_with_identity() {
let identity = MeshIdentity::generate();
let pk = identity.public_key();
let node = MeshNodeBuilder::new()
.identity(identity)
.build()
.await
.expect("build node");
assert_eq!(node.identity().public_key(), pk);
node.shutdown().await;
}
#[tokio::test]
async fn fapp_payload_routed_to_fapp_router() {
let identity = MeshIdentity::generate();
let node_pk = identity.public_key();
let node = MeshNodeBuilder::new()
.identity(identity)
.fapp_relay()
.build()
.await
.expect("build fapp node");
// Build a FAPP query payload (tag 0x02 + CBOR body).
let query = crate::fapp::SlotQuery {
query_id: [0xAA; 16],
fachrichtung: None,
modalitaet: None,
kostentraeger: None,
plz_prefix: None,
earliest: None,
latest: None,
slot_type: None,
max_results: 5,
};
let mut fapp_payload = vec![FAPP_WIRE_QUERY];
ciborium::into_writer(&query, &mut fapp_payload).expect("CBOR encode");
// Wrap in a MeshEnvelope addressed to this node.
let sender = MeshIdentity::generate();
let envelope = MeshEnvelope::new(&sender, &node_pk, fapp_payload, 3600, 5);
let sender_addr = MeshAddress::from_public_key(&sender.public_key());
let action = node.process_incoming(&sender_addr, envelope).expect("process");
match action {
IncomingAction::Fapp(FappAction::QueryResponse(resp)) => {
// Relay answers from its (empty) store — expect zero matches.
assert!(resp.matches.is_empty());
}
other => panic!("expected Fapp(QueryResponse), got {:?}", std::mem::discriminant(&other)),
}
node.shutdown().await;
}
#[tokio::test]
async fn non_fapp_payload_delivered_normally() {
let identity = MeshIdentity::generate();
let node_pk = identity.public_key();
let node = MeshNodeBuilder::new()
.identity(identity)
.fapp_relay()
.build()
.await
.expect("build fapp node");
// A regular (non-FAPP) payload — first byte 0xFF is not a FAPP tag.
let regular_payload = vec![0xFF, 0x01, 0x02, 0x03];
let sender = MeshIdentity::generate();
let envelope = MeshEnvelope::new(&sender, &node_pk, regular_payload.clone(), 3600, 5);
let sender_addr = MeshAddress::from_public_key(&sender.public_key());
let action = node.process_incoming(&sender_addr, envelope).expect("process");
match action {
IncomingAction::Deliver(env) => {
assert_eq!(env.payload, regular_payload);
}
other => panic!("expected Deliver, got {:?}", std::mem::discriminant(&other)),
}
node.shutdown().await;
}
#[tokio::test]
async fn fapp_payload_without_fapp_router_delivered_normally() {
let identity = MeshIdentity::generate();
let node_pk = identity.public_key();
// No FAPP capabilities — fapp_router is None.
let node = MeshNodeBuilder::new()
.identity(identity)
.build()
.await
.expect("build node");
assert!(node.fapp_router().is_none());
// Even though the payload has a FAPP tag, without a FappRouter it should
// be delivered as a normal message.
let fapp_payload = vec![FAPP_WIRE_ANNOUNCE, 0x01, 0x02];
let sender = MeshIdentity::generate();
let envelope = MeshEnvelope::new(&sender, &node_pk, fapp_payload.clone(), 3600, 5);
let sender_addr = MeshAddress::from_public_key(&sender.public_key());
let action = node.process_incoming(&sender_addr, envelope).expect("process");
match action {
IncomingAction::Deliver(env) => {
assert_eq!(env.payload, fapp_payload);
}
other => panic!("expected Deliver, got {:?}", std::mem::discriminant(&other)),
}
node.shutdown().await;
}
}
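The FAPP framing exercised by these tests is a single tag byte followed by a CBOR body (the test comment pins QUERY to `0x02`; the ANNOUNCE value below is an assumed placeholder, not taken from the crate). A dependency-free sketch of the tag dispatch that decides between FAPP routing and normal delivery:

```rust
// Wire tags as used in the tests above. QUERY = 0x02 per the test comment;
// the ANNOUNCE value is an assumption for illustration only.
const FAPP_WIRE_QUERY: u8 = 0x02;
const FAPP_WIRE_ANNOUNCE: u8 = 0x01;

/// Split a mesh payload into (FAPP tag, CBOR body) if the first byte is a
/// known FAPP tag; otherwise the payload is delivered as a normal message.
fn split_fapp(payload: &[u8]) -> Option<(u8, &[u8])> {
    match payload.split_first() {
        Some((&tag, body)) if tag == FAPP_WIRE_QUERY || tag == FAPP_WIRE_ANNOUNCE => {
            Some((tag, body))
        }
        _ => None,
    }
}

fn main() {
    // Tag + empty CBOR map (0xA0) as body.
    let payload = vec![FAPP_WIRE_QUERY, 0xA0];
    assert_eq!(split_fapp(&payload), Some((0x02, &[0xA0][..])));
    // First byte 0xFF is not a FAPP tag, so the payload falls through to Deliver.
    assert_eq!(split_fapp(&[0xFF, 0x01, 0x02, 0x03]), None);
}
```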

View File

@@ -21,6 +21,7 @@ use anyhow::{bail, Result};
use crate::announce::compute_address;
use crate::envelope::MeshEnvelope;
use crate::fapp_router::FappAction;
use crate::identity::MeshIdentity;
use crate::routing_table::RoutingTable;
use crate::store::MeshStore;
@@ -54,6 +55,8 @@ pub enum IncomingAction {
Store(MeshEnvelope),
/// Message was dropped (expired, max hops, invalid).
Dropped(String),
/// FAPP protocol message — handled by [`FappRouter`](crate::fapp_router::FappRouter).
Fapp(FappAction),
}
/// Per-destination delivery statistics.

View File

@@ -0,0 +1,381 @@
//! Observability for mesh nodes: health checks, metrics export, and tracing helpers.
//!
//! Provides:
//! - [`NodeHealth`] — structured health status for the mesh node
//! - [`HealthServer`] — lightweight HTTP server for `/healthz` and `/metricsz`
//! - Prometheus text format export from [`MeshMetrics`]
use std::collections::HashMap;
use std::io::Write as IoWrite;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpListener;
use crate::metrics::{MeshMetrics, MetricsSnapshot};
/// Node health status.
#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize)]
#[serde(rename_all = "lowercase")]
pub enum HealthStatus {
/// Node is healthy and accepting traffic.
Healthy,
/// Node is degraded but still operational.
Degraded,
/// Node is shutting down (draining connections).
Draining,
/// Node is unhealthy.
Unhealthy,
}
impl std::fmt::Display for HealthStatus {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Healthy => write!(f, "healthy"),
Self::Degraded => write!(f, "degraded"),
Self::Draining => write!(f, "draining"),
Self::Unhealthy => write!(f, "unhealthy"),
}
}
}
/// Structured health check response.
#[derive(Debug, Clone, serde::Serialize)]
pub struct NodeHealth {
/// Overall node status.
pub status: HealthStatus,
/// Node uptime in seconds.
pub uptime_secs: u64,
/// Number of active transport connections.
pub connections: u64,
/// Routing table size.
pub routing_table_size: u64,
/// Store queue depth.
pub store_size: u64,
/// Messages processed since start.
pub messages_processed: u64,
/// Individual subsystem checks.
pub checks: HashMap<String, SubsystemHealth>,
}
/// Per-subsystem health.
#[derive(Debug, Clone, serde::Serialize)]
pub struct SubsystemHealth {
pub status: HealthStatus,
pub message: String,
}
impl NodeHealth {
/// Build a health check from a metrics snapshot and node state.
pub fn from_snapshot(snapshot: &MetricsSnapshot, is_draining: bool) -> Self {
let mut checks = HashMap::new();
// Transport health: degraded if error rate > 10%.
let total_sent: u64 = snapshot.transports.values().map(|t| t.sent).sum();
let total_errors: u64 = snapshot.transports.values().map(|t| t.send_errors).sum();
let transport_status = if is_draining {
HealthStatus::Draining
} else if total_sent > 0 && total_errors * 10 > total_sent {
HealthStatus::Degraded
} else {
HealthStatus::Healthy
};
checks.insert(
"transport".to_string(),
SubsystemHealth {
status: transport_status,
message: format!(
"sent={}, errors={}, connections={}",
total_sent,
total_errors,
snapshot.transports.values().map(|t| t.connections).sum::<u64>(),
),
},
);
// Routing health.
let routing_status = HealthStatus::Healthy;
checks.insert(
"routing".to_string(),
SubsystemHealth {
status: routing_status,
message: format!(
"table_size={}, lookups={}, misses={}",
snapshot.routing.table_size,
snapshot.routing.lookups,
snapshot.routing.lookup_misses,
),
},
);
// Store health.
checks.insert(
"store".to_string(),
SubsystemHealth {
status: HealthStatus::Healthy,
message: format!(
"stored={}, delivered={}, expired={}, current={}",
snapshot.store.messages_stored,
snapshot.store.messages_delivered,
snapshot.store.messages_expired,
snapshot.store.current_size,
),
},
);
// Overall status: worst of all subsystems.
let overall = if is_draining {
HealthStatus::Draining
} else if checks.values().any(|c| c.status == HealthStatus::Unhealthy) {
HealthStatus::Unhealthy
} else if checks.values().any(|c| c.status == HealthStatus::Degraded) {
HealthStatus::Degraded
} else {
HealthStatus::Healthy
};
let connections = snapshot.transports.values().map(|t| t.connections).sum();
let messages_processed: u64 = snapshot.transports.values().map(|t| t.received).sum();
Self {
status: overall,
uptime_secs: snapshot.uptime_secs,
connections,
routing_table_size: snapshot.routing.table_size,
store_size: snapshot.store.current_size,
messages_processed,
checks,
}
}
/// HTTP status code for this health status.
pub fn http_status_code(&self) -> u16 {
match self.status {
HealthStatus::Healthy => 200,
HealthStatus::Degraded => 200,
HealthStatus::Draining => 503,
HealthStatus::Unhealthy => 503,
}
}
}
/// Render a [`MetricsSnapshot`] in Prometheus text exposition format.
pub fn prometheus_text(snapshot: &MetricsSnapshot) -> String {
let mut buf = Vec::with_capacity(2048);
// Uptime.
writeln!(buf, "# HELP mesh_uptime_seconds Node uptime in seconds.").ok();
writeln!(buf, "# TYPE mesh_uptime_seconds gauge").ok();
writeln!(buf, "mesh_uptime_seconds {}", snapshot.uptime_secs).ok();
// Transport metrics. HELP/TYPE metadata may appear only once per metric
// name, so emit it before the per-transport sample loop.
writeln!(buf, "# HELP mesh_transport_sent_total Messages sent via transport.").ok();
writeln!(buf, "# TYPE mesh_transport_sent_total counter").ok();
writeln!(buf, "# HELP mesh_transport_connections Active connections.").ok();
writeln!(buf, "# TYPE mesh_transport_connections gauge").ok();
for (name, t) in &snapshot.transports {
writeln!(buf, "mesh_transport_sent_total{{transport=\"{}\"}} {}", name, t.sent).ok();
writeln!(buf, "mesh_transport_received_total{{transport=\"{}\"}} {}", name, t.received).ok();
writeln!(buf, "mesh_transport_send_errors_total{{transport=\"{}\"}} {}", name, t.send_errors).ok();
writeln!(buf, "mesh_transport_bytes_sent_total{{transport=\"{}\"}} {}", name, t.bytes_sent).ok();
writeln!(buf, "mesh_transport_bytes_received_total{{transport=\"{}\"}} {}", name, t.bytes_received).ok();
writeln!(buf, "mesh_transport_connections{{transport=\"{}\"}} {}", name, t.connections).ok();
}
// Routing metrics.
writeln!(buf, "# HELP mesh_routing_table_size Current routing table entries.").ok();
writeln!(buf, "# TYPE mesh_routing_table_size gauge").ok();
writeln!(buf, "mesh_routing_table_size {}", snapshot.routing.table_size).ok();
writeln!(buf, "mesh_routing_lookups_total {}", snapshot.routing.lookups).ok();
writeln!(buf, "mesh_routing_lookup_misses_total {}", snapshot.routing.lookup_misses).ok();
writeln!(buf, "mesh_routing_announcements_processed_total {}", snapshot.routing.announcements_processed).ok();
// Store metrics.
writeln!(buf, "# HELP mesh_store_current_size Current messages in store.").ok();
writeln!(buf, "# TYPE mesh_store_current_size gauge").ok();
writeln!(buf, "mesh_store_current_size {}", snapshot.store.current_size).ok();
writeln!(buf, "mesh_store_messages_stored_total {}", snapshot.store.messages_stored).ok();
writeln!(buf, "mesh_store_messages_delivered_total {}", snapshot.store.messages_delivered).ok();
writeln!(buf, "mesh_store_messages_expired_total {}", snapshot.store.messages_expired).ok();
// Crypto metrics.
writeln!(buf, "mesh_crypto_encryptions_total {}", snapshot.crypto.encryptions).ok();
writeln!(buf, "mesh_crypto_decryptions_total {}", snapshot.crypto.decryptions).ok();
writeln!(buf, "mesh_crypto_signature_verifications_total {}", snapshot.crypto.signature_verifications).ok();
writeln!(buf, "mesh_crypto_signature_failures_total {}", snapshot.crypto.signature_failures).ok();
writeln!(buf, "mesh_crypto_replay_detections_total {}", snapshot.crypto.replay_detections).ok();
String::from_utf8(buf).unwrap_or_default()
}
/// Lightweight HTTP health/metrics server for the mesh node.
///
/// Serves:
/// - `GET /healthz` — JSON health check
/// - `GET /metricsz` — Prometheus text format metrics
///
/// Uses raw TCP + minimal HTTP parsing to avoid adding heavy dependencies
/// (no axum/hyper/warp needed).
pub struct HealthServer {
metrics: Arc<MeshMetrics>,
draining: Arc<std::sync::atomic::AtomicBool>,
}
impl HealthServer {
/// Create a new health server backed by the given metrics.
pub fn new(metrics: Arc<MeshMetrics>, draining: Arc<std::sync::atomic::AtomicBool>) -> Self {
Self { metrics, draining }
}
/// Start serving on the given address. Returns when the listener is bound.
///
/// The server runs as a background tokio task and stops when the `shutdown`
/// channel receives a value or its sender is dropped.
pub async fn serve(
self,
addr: SocketAddr,
mut shutdown: tokio::sync::watch::Receiver<bool>,
) -> Result<SocketAddr, std::io::Error> {
let listener = TcpListener::bind(addr).await?;
let bound = listener.local_addr()?;
tracing::info!(addr = %bound, "health/metrics server listening");
let metrics = self.metrics;
let draining = self.draining;
tokio::spawn(async move {
loop {
tokio::select! {
biased;
_ = shutdown.changed() => {
tracing::debug!("health server shutting down");
break;
}
accept = listener.accept() => {
match accept {
Ok((mut stream, _peer)) => {
let metrics = Arc::clone(&metrics);
let is_draining = draining.load(std::sync::atomic::Ordering::Relaxed);
tokio::spawn(async move {
// Read the request (up to 4KB — we only need the path).
let mut buf = [0u8; 4096];
let n = match tokio::io::AsyncReadExt::read(&mut stream, &mut buf).await {
Ok(n) => n,
Err(_) => return,
};
let request = String::from_utf8_lossy(&buf[..n]);
// Minimal HTTP path extraction.
let path = request
.lines()
.next()
.and_then(|line| line.split_whitespace().nth(1))
.unwrap_or("/");
let (status, content_type, body) = match path {
"/healthz" => {
let snapshot = metrics.snapshot();
let health = NodeHealth::from_snapshot(&snapshot, is_draining);
let code = health.http_status_code();
let json = serde_json::to_string_pretty(&health).unwrap_or_default();
(code, "application/json", json)
}
"/metricsz" => {
let snapshot = metrics.snapshot();
let text = prometheus_text(&snapshot);
(200, "text/plain; version=0.0.4", text)
}
_ => (404, "text/plain", "Not Found\n".to_string()),
};
let response = format!(
"HTTP/1.1 {} {}\r\nContent-Type: {}\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
status,
match status { 200 => "OK", 503 => "Service Unavailable", _ => "Not Found" },
content_type,
body.len(),
body,
);
let _ = stream.write_all(response.as_bytes()).await;
});
}
Err(e) => {
tracing::warn!(error = %e, "health server accept error");
}
}
}
}
}
});
Ok(bound)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::metrics::MeshMetrics;
#[test]
fn health_from_snapshot_healthy() {
let m = MeshMetrics::new();
m.transport("tcp").sent.inc_by(100);
m.transport("tcp").connections.set(5);
m.routing.table_size.set(42);
let snapshot = m.snapshot();
let health = NodeHealth::from_snapshot(&snapshot, false);
assert_eq!(health.status, HealthStatus::Healthy);
assert_eq!(health.connections, 5);
assert_eq!(health.routing_table_size, 42);
assert_eq!(health.http_status_code(), 200);
}
#[test]
fn health_from_snapshot_draining() {
let m = MeshMetrics::new();
let snapshot = m.snapshot();
let health = NodeHealth::from_snapshot(&snapshot, true);
assert_eq!(health.status, HealthStatus::Draining);
assert_eq!(health.http_status_code(), 503);
}
#[test]
fn health_from_snapshot_degraded() {
let m = MeshMetrics::new();
// >10% error rate triggers degraded.
m.transport("tcp").sent.inc_by(10);
m.transport("tcp").send_errors.inc_by(5);
let snapshot = m.snapshot();
let health = NodeHealth::from_snapshot(&snapshot, false);
assert_eq!(health.status, HealthStatus::Degraded);
}
#[test]
fn prometheus_text_format() {
let m = MeshMetrics::new();
m.transport("tcp").sent.inc_by(42);
m.routing.table_size.set(10);
m.store.messages_stored.inc_by(5);
let snapshot = m.snapshot();
let text = prometheus_text(&snapshot);
assert!(text.contains("mesh_uptime_seconds"));
assert!(text.contains("mesh_transport_sent_total{transport=\"tcp\"} 42"));
assert!(text.contains("mesh_routing_table_size 10"));
assert!(text.contains("mesh_store_messages_stored_total 5"));
}
}
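The health server's raw-TCP approach rests on two small steps: extracting the path from the first request line and emitting a correctly framed HTTP/1.1 response with an explicit `Content-Length`. A standalone sketch of both (the function names here are illustrative, not crate APIs):

```rust
/// Extract the request path from a raw HTTP request, defaulting to "/".
fn extract_path(request: &str) -> &str {
    request
        .lines()
        .next()
        .and_then(|line| line.split_whitespace().nth(1))
        .unwrap_or("/")
}

/// Frame a minimal HTTP/1.1 response; Content-Length lets the client read
/// the exact body size even though we close the connection afterwards.
fn frame_response(status: u16, content_type: &str, body: &str) -> String {
    let reason = match status {
        200 => "OK",
        503 => "Service Unavailable",
        _ => "Not Found",
    };
    format!(
        "HTTP/1.1 {status} {reason}\r\nContent-Type: {content_type}\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{body}",
        body.len(),
    )
}

fn main() {
    let req = "GET /healthz HTTP/1.1\r\nHost: localhost\r\n\r\n";
    assert_eq!(extract_path(req), "/healthz");
    let resp = frame_response(200, "application/json", "{\"status\":\"healthy\"}");
    assert!(resp.starts_with("HTTP/1.1 200 OK\r\n"));
    println!("{resp}");
}
```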

View File

@@ -0,0 +1,45 @@
//! Optional NDJSON events for the mesh graph visualizer (`viz/mesh-graph.html`).
//!
//! When the environment variable `QPC_MESH_VIZ_LOG` is set to a file path, one JSON object
//! per line is appended for selected mesh events. The `viz/bridge` binary can tail this file
//! and forward lines to the browser over WebSocket.
use serde::Serialize;
#[derive(Serialize)]
struct HopEvent<'a> {
#[serde(rename = "type")]
kind: &'static str,
from: &'a str,
to: &'a str,
ms: u64,
}
/// Log a relay hop (forwarding to `next_hop`). No-op unless `QPC_MESH_VIZ_LOG` is set.
pub fn log_forward_hop(from_sender: &str, next_hop: &str, latency_ms: u64) {
let Ok(path) = std::env::var("QPC_MESH_VIZ_LOG") else {
return;
};
let ev = HopEvent {
kind: "hop",
from: from_sender,
to: next_hop,
ms: latency_ms,
};
let Ok(line) = serde_json::to_string(&ev) else {
return;
};
append_line(&path, &line);
}
fn append_line(path: &str, line: &str) {
use std::io::Write;
let Ok(mut f) = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open(path)
else {
return;
};
let _ = writeln!(f, "{line}");
}
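Each event is one JSON object per line (NDJSON). A dependency-free sketch of the hop-event line shape; the real module uses `serde_json`, which also handles string escaping, so this sketch assumes the peer IDs are plain hex with no characters needing escapes:

```rust
/// Build the NDJSON line for a relay hop, mirroring the `HopEvent` shape.
/// Assumes `from` and `to` contain no JSON-special characters (true for hex addresses).
fn hop_event_line(from: &str, to: &str, ms: u64) -> String {
    format!(r#"{{"type":"hop","from":"{from}","to":"{to}","ms":{ms}}}"#)
}

fn main() {
    let line = hop_event_line("a1b2", "c3d4", 12);
    assert_eq!(line, r#"{"type":"hop","from":"a1b2","to":"c3d4","ms":12}"#);
    println!("{line}");
}
```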

View File

@@ -0,0 +1,73 @@
//! Integration: [`meshservice`] wire payloads over [`quicprochat_p2p::transport_tcp::TcpTransport`].
//!
//! Demonstrates that the same Ed25519 seed backs both [`MeshIdentity`] (P2P) and
//! [`meshservice::identity::ServiceIdentity`], so service-layer signatures verify after
//! hop-across-TCP. Production mesh would use [`MeshEnvelope`] / iroh; this test keeps
//! the transport boundary explicit.
use meshservice::capabilities;
use meshservice::identity::ServiceIdentity;
use meshservice::router::ServiceRouter;
use meshservice::services::fapp::{create_announce, FappService, Modality, SlotAnnounce, Specialism};
use meshservice::wire;
use quicprochat_p2p::address::MeshAddress;
use quicprochat_p2p::identity::MeshIdentity;
use quicprochat_p2p::transport::MeshTransport;
use quicprochat_p2p::transport_tcp::TcpTransport;
#[tokio::test]
async fn meshservice_fapp_over_tcp_roundtrip() {
let seed = [0x5eu8; 32];
let mesh = MeshIdentity::from_seed(seed);
let service = ServiceIdentity::from_secret(&seed);
assert_eq!(mesh.public_key(), service.public_key());
assert_eq!(
*MeshAddress::from_public_key(&mesh.public_key()).as_bytes(),
service.address()
);
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral],
Modality::VideoCall,
"803",
)
.with_slots(2);
let msg = create_announce(&service, &announce, 1).expect("create_announce");
let frame = wire::encode(&msg).expect("wire encode");
let transport = TcpTransport::bind("127.0.0.1:0")
.await
.expect("bind tcp");
let dest = transport.transport_addr();
let recv = tokio::spawn(async move { transport.recv().await.expect("recv") });
let send_transport = TcpTransport::bind("127.0.0.1:0")
.await
.expect("bind sender");
send_transport
.send(&dest, &frame)
.await
.expect("send");
let packet = recv.await.expect("join recv");
let decoded = wire::decode(&packet.data).expect("wire decode");
assert!(decoded.verify(&service.public_key()));
assert_eq!(decoded.service_id, meshservice::service_ids::FAPP);
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));
let action = router
.handle(decoded, Some(service.public_key()))
.expect("router handle");
assert!(
matches!(
action,
meshservice::router::ServiceAction::Store
| meshservice::router::ServiceAction::StoreAndForward
),
"unexpected action: {action:?}"
);
assert!(!router.store().is_empty());
}

View File

@@ -112,9 +112,10 @@ pub mod method_ids {
pub const CHECK_REVOCATION: u16 = 511;
pub const AUDIT_KEY_TRANSPARENCY: u16 = 520;
// Blob (600-602)
pub const UPLOAD_BLOB: u16 = 600;
pub const DOWNLOAD_BLOB: u16 = 601;
pub const DELETE_BLOB: u16 = 602;
// Device (700-702, 710)
pub const REGISTER_DEVICE: u16 = 700;

View File

@@ -185,6 +185,13 @@ impl ConversationStore {
identity_key BLOB PRIMARY KEY,
blocked_at_ms INTEGER NOT NULL,
reason TEXT NOT NULL DEFAULT ''
);
CREATE TABLE IF NOT EXISTS peer_identity_keys (
username TEXT PRIMARY KEY,
identity_key BLOB NOT NULL,
first_seen_ms INTEGER NOT NULL,
last_seen_ms INTEGER NOT NULL
);",
)
.context("migrate conversation db")
@@ -524,6 +531,112 @@ impl ConversationStore {
msgs.reverse();
Ok(msgs)
}
// ── Peer identity key tracking ──────────────────────────────────────────
/// Look up the stored identity key for a peer by username.
pub fn get_peer_identity_key(&self, username: &str) -> anyhow::Result<Option<Vec<u8>>> {
let key: Option<Vec<u8>> = self
.conn
.query_row(
"SELECT identity_key FROM peer_identity_keys WHERE username = ?1",
params![username],
|row| row.get(0),
)
.optional()?;
Ok(key)
}
/// Store (or update) a peer's identity key. Returns the previous key if it changed.
pub fn store_peer_identity_key(
&self,
username: &str,
identity_key: &[u8],
) -> anyhow::Result<Option<Vec<u8>>> {
let now_ms = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as i64;
let old = self.get_peer_identity_key(username)?;
self.conn.execute(
"INSERT INTO peer_identity_keys (username, identity_key, first_seen_ms, last_seen_ms)
VALUES (?1, ?2, ?3, ?3)
ON CONFLICT(username) DO UPDATE SET identity_key = ?2, last_seen_ms = ?3",
params![username, identity_key, now_ms],
)?;
// Return the old key only if it's different from the new one.
match old {
Some(ref prev) if prev != identity_key => Ok(old),
_ => Ok(None),
}
}
// ── Full-text search ────────────────────────────────────────────────────
/// Search messages across all conversations by body text.
pub fn search_messages(
&self,
query: &str,
limit: usize,
) -> anyhow::Result<Vec<SearchResult>> {
let pattern = format!("%{query}%");
let mut stmt = self.conn.prepare(
"SELECT m.conversation_id, c.display_name, m.sender_name, m.body,
m.timestamp_ms, m.message_id
FROM messages m
JOIN conversations c ON c.id = m.conversation_id
WHERE m.body LIKE ?1
ORDER BY m.timestamp_ms DESC
LIMIT ?2",
)?;
let rows = stmt.query_map(
params![pattern, limit.min(u32::MAX as usize) as u32],
|row| {
let conv_id_raw: Vec<u8> = row.get(0)?;
let mut conv_id = [0u8; 16];
if conv_id_raw.len() == 16 {
conv_id.copy_from_slice(&conv_id_raw);
}
Ok(SearchResult {
conversation_id: ConversationId(conv_id),
conversation_name: row.get(1)?,
sender_name: row.get(2)?,
body: row.get(3)?,
timestamp_ms: row.get::<_, i64>(4)? as u64,
message_id: row.get(5)?,
})
},
)?;
rows.collect::<Result<Vec<_>, _>>().map_err(Into::into)
}
// ── Conversation deletion ───────────────────────────────────────────────
/// Delete a conversation and all its messages.
pub fn delete_conversation(&self, id: &ConversationId) -> anyhow::Result<bool> {
self.conn
.execute("DELETE FROM messages WHERE conversation_id = ?1", params![id.0.as_slice()])?;
self.conn
.execute("DELETE FROM outbox WHERE conversation_id = ?1", params![id.0.as_slice()])?;
let rows = self
.conn
.execute("DELETE FROM conversations WHERE id = ?1", params![id.0.as_slice()])?;
Ok(rows > 0)
}
}
/// A search result across conversations.
#[derive(Clone, Debug)]
pub struct SearchResult {
pub conversation_id: ConversationId,
pub conversation_name: String,
pub sender_name: Option<String>,
pub body: String,
pub timestamp_ms: u64,
pub message_id: Option<Vec<u8>>,
}
// ── Helpers ──────────────────────────────────────────────────────────────────

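The upsert contract above, returning the previous key only when it differs, is what feeds the `IdentityKeyChanged` event. The same trust-on-first-use semantics can be modeled in memory (an illustrative sketch, not the SQLite-backed implementation):

```rust
use std::collections::HashMap;

/// In-memory model of the peer-key upsert: store the key and return the
/// previous key only if it changed (a potential re-registration or MITM).
fn pin_key(
    pins: &mut HashMap<String, Vec<u8>>,
    username: &str,
    key: &[u8],
) -> Option<Vec<u8>> {
    let old = pins.insert(username.to_string(), key.to_vec());
    match old {
        Some(prev) if prev != key => Some(prev),
        _ => None,
    }
}

fn main() {
    let mut pins = HashMap::new();
    // First sighting: nothing to report (trust on first use).
    assert_eq!(pin_key(&mut pins, "alice", &[1; 32]), None);
    // Same key again: still nothing to report.
    assert_eq!(pin_key(&mut pins, "alice", &[1; 32]), None);
    // Key changed: the previous key is surfaced so the UI can alert.
    assert_eq!(pin_key(&mut pins, "alice", &[2; 32]), Some(vec![1; 32]));
}
```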
View File

@@ -24,6 +24,21 @@ pub enum SdkError {
#[error("storage error: {0}")]
Storage(String),
#[error("session expired — re-login required")]
SessionExpired,
#[error("{0}")]
Other(#[from] anyhow::Error),
}
impl SdkError {
/// Returns `true` if the error indicates the session token has expired
/// and the user needs to re-authenticate.
pub fn is_auth_expired(&self) -> bool {
matches!(self, SdkError::SessionExpired)
|| matches!(self, SdkError::Rpc(quicprochat_rpc::error::RpcError::Server {
status: quicprochat_rpc::error::RpcStatus::Unauthorized,
..
}))
}
}

View File

@@ -82,6 +82,32 @@ pub enum ClientEvent {
received_seq: u64,
},
/// Session token expired — the user must re-authenticate.
/// Emitted when an RPC returns Unauthorized after a previously valid session.
AuthExpired,
/// A peer's identity key changed — possible re-registration, new device,
/// or MITM attack. The UI MUST alert the user (like Signal's "safety number changed").
IdentityKeyChanged {
username: String,
old_fingerprint: String,
new_fingerprint: String,
},
/// A read receipt was received — the reader has read messages up to the given ID.
ReadReceipt {
conversation_id: [u8; 16],
reader: String,
up_to_message_id: Vec<u8>,
timestamp_ms: u64,
},
/// Server confirmed delivery of a message.
DeliveryConfirmation {
conversation_id: [u8; 16],
message_id: Vec<u8>,
},
/// An error occurred in the background.
Error { message: String },
}
@@ -219,11 +245,27 @@ mod tests {
expected_seq: 0,
received_seq: 1,
},
ClientEvent::AuthExpired,
ClientEvent::IdentityKeyChanged {
username: "u".into(),
old_fingerprint: "old".into(),
new_fingerprint: "new".into(),
},
ClientEvent::ReadReceipt {
conversation_id: [0; 16],
reader: "r".into(),
up_to_message_id: vec![],
timestamp_ms: 0,
},
ClientEvent::DeliveryConfirmation {
conversation_id: [0; 16],
message_id: vec![],
},
ClientEvent::Error { message: "e".into() },
];
for event in &events {
let _ = event.clone();
}
assert_eq!(events.len(), 21);
}
}

View File

@@ -142,15 +142,33 @@ pub fn format_actor(identity_key: &[u8], redact: bool) -> String {
}
}
/// Current ISO-8601 UTC timestamp (e.g. `2026-04-04T12:30:45Z`).
pub fn now_iso8601() -> String {
let d = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default();
let secs = d.as_secs();
// Manual UTC calendar conversion — avoids pulling in chrono.
let days = secs / 86400;
let time_of_day = secs % 86400;
let hours = time_of_day / 3600;
let minutes = (time_of_day % 3600) / 60;
let seconds = time_of_day % 60;
// Civil date from day count (epoch = 1970-01-01, algorithm from Howard Hinnant).
let z = days as i64 + 719468;
let era = if z >= 0 { z } else { z - 146096 } / 146097;
let doe = (z - era * 146097) as u64; // day of era [0, 146096]
let yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365;
let y = yoe as i64 + era * 400;
let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);
let mp = (5 * doy + 2) / 153;
let d = doy - (153 * mp + 2) / 5 + 1;
let m = if mp < 10 { mp + 3 } else { mp - 9 };
let y = if m <= 2 { y + 1 } else { y };
format!("{y:04}-{m:02}-{d:02}T{hours:02}:{minutes:02}:{seconds:02}Z")
}
#[cfg(test)]

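The calendar conversion above follows Howard Hinnant's `civil_from_days` algorithm. It can be spot-checked against known timestamps with a standalone copy (the function name here is illustrative):

```rust
/// Format a Unix timestamp (seconds) as ISO-8601 UTC, mirroring `now_iso8601`.
fn format_iso8601(secs: u64) -> String {
    let days = secs / 86400;
    let time_of_day = secs % 86400;
    let hours = time_of_day / 3600;
    let minutes = (time_of_day % 3600) / 60;
    let seconds = time_of_day % 60;
    // Civil date from day count (Howard Hinnant's civil_from_days).
    let z = days as i64 + 719468;
    let era = if z >= 0 { z } else { z - 146096 } / 146097;
    let doe = (z - era * 146097) as u64; // day of era [0, 146096]
    let yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365;
    let y = yoe as i64 + era * 400;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);
    let mp = (5 * doy + 2) / 153;
    let d = doy - (153 * mp + 2) / 5 + 1;
    let m = if mp < 10 { mp + 3 } else { mp - 9 };
    let y = if m <= 2 { y + 1 } else { y };
    format!("{y:04}-{m:02}-{d:02}T{hours:02}:{minutes:02}:{seconds:02}Z")
}

fn main() {
    assert_eq!(format_iso8601(0), "1970-01-01T00:00:00Z");
    // Day 11017 of the epoch; exercises the leap-day and era boundary.
    assert_eq!(format_iso8601(951_868_800), "2000-03-01T00:00:00Z");
}
```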
View File

@@ -194,4 +194,27 @@ impl BlobService {
mime_type: meta.mime_type,
})
}
/// Delete a blob and its metadata from disk.
pub fn delete_blob(&self, blob_id: &[u8]) -> Result<bool, DomainError> {
if blob_id.len() != 32 {
return Err(DomainError::BlobHashLength(blob_id.len()));
}
let blob_hex = hex::encode(blob_id);
let dir = self.blobs_dir();
let blob_path = dir.join(&blob_hex);
let meta_path = dir.join(format!("{blob_hex}.meta"));
let part_path = dir.join(format!("{blob_hex}.part"));
if !blob_path.exists() && !part_path.exists() {
return Ok(false);
}
let _ = std::fs::remove_file(&blob_path);
let _ = std::fs::remove_file(&meta_path);
let _ = std::fs::remove_file(&part_path);
Ok(true)
}
}

View File

@@ -34,6 +34,38 @@ mod ws_bridge;
#[cfg(feature = "webtransport")]
mod webtransport;
/// Parse `QPC_ADMIN_KEYS` env var — comma-separated hex-encoded Ed25519 public keys.
/// Returns empty vec if unset (backward-compatible: all users can moderate).
#[cfg(feature = "webtransport")]
fn parse_admin_keys() -> Vec<Vec<u8>> {
let Ok(val) = std::env::var("QPC_ADMIN_KEYS") else {
return Vec::new();
};
val.split(',')
.filter_map(|s| {
let s = s.trim();
if s.is_empty() {
return None;
}
match hex::decode(s) {
Ok(key) if key.len() == 32 => Some(key),
Ok(key) => {
tracing::warn!(
len = key.len(),
hex = s,
"QPC_ADMIN_KEYS: ignoring key with wrong length (expected 32 bytes)"
);
None
}
Err(e) => {
tracing::warn!(hex = s, error = %e, "QPC_ADMIN_KEYS: ignoring invalid hex");
None
}
}
})
.collect()
}
use auth::{AuthConfig, PendingLogin, RateEntry, SessionInfo};
use config::{
load_config, merge_config, validate_production_config, DEFAULT_DATA_DIR, DEFAULT_DB_PATH,
@@ -147,6 +179,15 @@ struct Args {
/// Storage/database operation timeout in seconds (default: 10).
#[arg(long, env = "QPQ_STORAGE_TIMEOUT", default_value_t = config::DEFAULT_STORAGE_TIMEOUT_SECS)]
storage_timeout: u64,
/// Enable traffic analysis resistance (decoy traffic + timing jitter).
/// Requires --features traffic-resistance.
#[arg(long, env = "QPQ_TRAFFIC_RESISTANCE", default_value_t = false)]
traffic_resistance: bool,
/// Mean interval in milliseconds between decoy messages (default: 5000).
#[arg(long, env = "QPQ_DECOY_INTERVAL_MS", default_value_t = 5000)]
decoy_interval_ms: u64,
}
// ── In-flight RPC guard ──────────────────────────────────────────────────────
@@ -433,6 +474,7 @@ async fn main() -> anyhow::Result<()> {
storage_backend: effective.store_backend.clone(),
federation_client: None,
local_domain: effective.federation.as_ref().map(|f| f.domain.clone()).unwrap_or_default(),
admin_keys: parse_admin_keys(),
});
let wt_registry = Arc::new(v2_handlers::build_registry(
@@ -613,6 +655,40 @@ async fn main() -> anyhow::Result<()> {
"effective timeouts and listeners"
);
// ── Traffic resistance (decoy traffic generator) ──────────────────────────
#[cfg(feature = "traffic-resistance")]
let _decoy_handle = {
if args.traffic_resistance {
let shutdown_notify = Arc::new(tokio::sync::Notify::new());
let delivery_svc = Arc::new(domain::delivery::DeliveryService {
store: Arc::clone(&store),
waiters: Arc::clone(&waiters),
});
let config = domain::traffic_resistance::TrafficResistanceConfig {
decoy_interval_ms: args.decoy_interval_ms,
..Default::default()
};
tracing::info!(
decoy_interval_ms = config.decoy_interval_ms,
jitter_max_ms = config.jitter_max_ms,
padding_boundary = config.padding_boundary,
"traffic resistance enabled — decoy generator started"
);
// Start with an empty recipient list; decoys will be a no-op until
// recipients are populated. A future enhancement can dynamically
// update the list from connected sessions.
Some(domain::traffic_resistance::spawn_decoy_generator(
delivery_svc,
Vec::new(),
b"decoy-channel".to_vec(),
config,
shutdown_notify,
))
} else {
None
}
};
// In-flight RPC counter for graceful drain on shutdown.
let in_flight: Arc<AtomicUsize> = Arc::new(AtomicUsize::new(0));


@@ -99,3 +99,32 @@ pub async fn handle_download_blob(state: Arc<ServerState>, ctx: RequestContext)
Err(e) => domain_err(e),
}
}
pub async fn handle_delete_blob(state: Arc<ServerState>, ctx: RequestContext) -> HandlerResult {
let _identity_key = match require_auth(&state, &ctx) {
Ok(ik) => ik,
Err(e) => return e,
};
let req = match v1::DeleteBlobRequest::decode(ctx.payload) {
Ok(r) => r,
Err(e) => {
return HandlerResult::err(
quicprochat_rpc::error::RpcStatus::BadRequest,
&format!("decode: {e}"),
)
}
};
let svc = BlobService {
data_dir: state.data_dir.clone(),
};
match svc.delete_blob(&req.blob_id) {
Ok(deleted) => {
let proto = v1::DeleteBlobResponse { deleted };
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(e) => domain_err(e),
}
}


@@ -42,9 +42,18 @@ pub async fn handle_remove_member(
store: Arc::clone(&state.store),
};
// Only group creator (admin) can remove members.
if let Ok(Some(meta)) = svc.get_metadata(&req.group_id) {
if !meta.creator_key.is_empty() && meta.creator_key != identity_key {
return HandlerResult::err(
RpcStatus::Forbidden,
"only the group creator can remove members",
);
}
}
match svc.remove_member(&req.group_id, &req.member_identity_key) {
Ok(_) => {
let _ = identity_key; // caller is authorized; removal tracked
let proto = v1::RemoveMemberResponse {
commit: Vec::new(), // commit is generated client-side
};
@@ -73,6 +82,16 @@ pub async fn handle_update_group_metadata(
store: Arc::clone(&state.store),
};
// Only group creator (admin) can update metadata.
if let Ok(Some(meta)) = svc.get_metadata(&req.group_id) {
if !meta.creator_key.is_empty() && meta.creator_key != identity_key {
return HandlerResult::err(
RpcStatus::Forbidden,
"only the group creator can update metadata",
);
}
}
let domain_req = UpdateGroupMetadataReq {
group_id: req.group_id,
name: req.name,


@@ -68,6 +68,8 @@ pub struct ServerState {
pub federation_client: Option<Arc<crate::federation::FederationClient>>,
/// This server's domain for federation addressing. Empty when federation is disabled.
pub local_domain: String,
/// Admin identity keys (from `QPC_ADMIN_USERS` env or config). Empty = allow all (MVP).
pub admin_keys: Vec<Vec<u8>>,
}
/// A ban record for a user.
@@ -316,6 +318,11 @@ pub fn build_registry(default_rpc_timeout: std::time::Duration) -> MethodRegistr
std::time::Duration::from_secs(120),
blob::handle_download_blob,
);
reg.register(
method_ids::DELETE_BLOB,
"DeleteBlob",
blob::handle_delete_blob,
);
// Device (700-702)
reg.register(


@@ -1,4 +1,8 @@
//! Moderation handlers — report, ban, unban, list reports, list banned.
//!
//! All mutations are persisted via `ModerationService` (SQL store).
//! The in-memory `banned_users` DashMap is kept as a hot cache for the
//! auth middleware's fast-path ban check.
use std::sync::Arc;
@@ -9,7 +13,34 @@ use quicprochat_rpc::error::RpcStatus;
use quicprochat_rpc::method::{HandlerResult, RequestContext};
use tracing::{info, warn};
use crate::domain::moderation::ModerationService;
use crate::domain::types::*;
use super::{require_auth, BanRecord, ServerState};
/// Build a `ModerationService` from shared state.
fn mod_service(state: &ServerState) -> ModerationService {
ModerationService {
store: Arc::clone(&state.store),
}
}
/// Check whether the caller is an admin. Admins are identified by identity
/// key listed in `state.admin_keys`. Returns `Err(HandlerResult)` with
/// `Forbidden` status for non-admins.
fn require_admin(state: &ServerState, identity_key: &[u8]) -> Result<(), HandlerResult> {
if state.admin_keys.is_empty() {
// No admin list configured — allow all (backward-compatible MVP behavior).
return Ok(());
}
if state.admin_keys.iter().any(|k| k.as_slice() == identity_key) {
return Ok(());
}
Err(HandlerResult::err(
RpcStatus::Forbidden,
"admin role required",
))
}
/// Submit an encrypted report. Any authenticated user can report.
pub async fn handle_report_message(state: Arc<ServerState>, ctx: RequestContext) -> HandlerResult {
@@ -23,81 +54,91 @@ pub async fn handle_report_message(state: Arc<ServerState>, ctx: RequestContext)
Err(e) => return HandlerResult::err(RpcStatus::BadRequest, &format!("decode: {e}")),
};
let svc = mod_service(&state);
match svc.report_message(ReportMessageReq {
encrypted_report: req.encrypted_report,
conversation_id: req.conversation_id,
reporter_identity: identity_key.clone(),
}) {
Ok(resp) => {
info!(
reporter = hex::encode(&identity_key[..4.min(identity_key.len())]),
"moderation report submitted (persisted)"
);
let proto = v1::ReportMessageResponse {
accepted: resp.accepted,
};
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(DomainError::BadParams(msg)) => HandlerResult::err(RpcStatus::BadRequest, &msg),
Err(e) => {
warn!(error = %e, "report_message failed");
HandlerResult::err(RpcStatus::Internal, "internal error")
}
}
}
/// Ban a user. Requires admin role.
pub async fn handle_ban_user(state: Arc<ServerState>, ctx: RequestContext) -> HandlerResult {
let admin_key = match require_auth(&state, &ctx) {
Ok(ik) => ik,
Err(e) => return e,
};
if let Err(e) = require_admin(&state, &admin_key) {
return e;
}
let req = match v1::BanUserRequest::decode(ctx.payload) {
Ok(r) => r,
Err(e) => return HandlerResult::err(RpcStatus::BadRequest, &format!("decode: {e}")),
};
let svc = mod_service(&state);
match svc.ban_user(BanUserReq {
identity_key: req.identity_key.clone(),
reason: req.reason.clone(),
duration_secs: req.duration_secs,
}) {
Ok(resp) => {
// Update hot cache so auth middleware picks it up immediately.
let now = crate::auth::current_timestamp();
let expires_at = if req.duration_secs == 0 {
0
} else {
now + req.duration_secs
};
state.banned_users.insert(
req.identity_key.clone(),
BanRecord {
reason: req.reason.clone(),
banned_at: now,
expires_at,
},
);
info!(
target_key = hex::encode(&req.identity_key[..4.min(req.identity_key.len())]),
admin_key = hex::encode(&admin_key[..4.min(admin_key.len())]),
reason = %req.reason,
duration_secs = req.duration_secs,
"user banned (persisted)"
);
let proto = v1::BanUserResponse {
success: resp.success,
};
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(DomainError::InvalidIdentityKey(len)) => HandlerResult::err(
RpcStatus::BadRequest,
&format!("identity_key must be 32 bytes, got {len}"),
),
Err(e) => {
warn!(error = %e, "ban_user failed");
HandlerResult::err(RpcStatus::Internal, "internal error")
}
}
}
/// Unban a user. Requires admin role.
@@ -107,6 +148,10 @@ pub async fn handle_unban_user(state: Arc<ServerState>, ctx: RequestContext) ->
Err(e) => return e,
};
if let Err(e) = require_admin(&state, &admin_key) {
return e;
}
let req = match v1::UnbanUserRequest::decode(ctx.payload) {
Ok(r) => r,
Err(e) => return HandlerResult::err(RpcStatus::BadRequest, &format!("decode: {e}")),
@@ -116,84 +161,115 @@ pub async fn handle_unban_user(state: Arc<ServerState>, ctx: RequestContext) ->
return HandlerResult::err(RpcStatus::BadRequest, "identity_key required");
}
let svc = mod_service(&state);
match svc.unban_user(UnbanUserReq {
identity_key: req.identity_key.clone(),
}) {
Ok(resp) => {
// Remove from hot cache.
state.banned_users.remove(&req.identity_key);
info!(
target_key = hex::encode(&req.identity_key[..4.min(req.identity_key.len())]),
admin_key = hex::encode(&admin_key[..4.min(admin_key.len())]),
removed = resp.success,
"user unbanned (persisted)"
);
let proto = v1::UnbanUserResponse {
success: resp.success,
};
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(e) => {
warn!(error = %e, "unban_user failed");
HandlerResult::err(RpcStatus::Internal, "internal error")
}
}
}
/// List moderation reports. Requires admin role.
pub async fn handle_list_reports(state: Arc<ServerState>, ctx: RequestContext) -> HandlerResult {
let admin_key = match require_auth(&state, &ctx) {
Ok(ik) => ik,
Err(e) => return e,
};
if let Err(e) = require_admin(&state, &admin_key) {
return e;
}
let req = match v1::ListReportsRequest::decode(ctx.payload) {
Ok(r) => r,
Err(e) => return HandlerResult::err(RpcStatus::BadRequest, &format!("decode: {e}")),
};
let limit = if req.limit == 0 { 50 } else { req.limit };
let svc = mod_service(&state);
match svc.list_reports(ListReportsReq {
limit,
offset: req.offset,
}) {
Ok(resp) => {
let entries: Vec<v1::ReportEntry> = resp
.reports
.into_iter()
.map(|r| v1::ReportEntry {
id: r.id,
encrypted_report: r.encrypted_report,
conversation_id: r.conversation_id,
reporter_identity: r.reporter_identity,
timestamp: r.timestamp,
})
.collect();
let proto = v1::ListReportsResponse { reports: entries };
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(e) => {
warn!(error = %e, "list_reports failed");
HandlerResult::err(RpcStatus::Internal, "internal error")
}
}
}
/// List banned users. Requires admin role.
pub async fn handle_list_banned(state: Arc<ServerState>, ctx: RequestContext) -> HandlerResult {
let admin_key = match require_auth(&state, &ctx) {
Ok(ik) => ik,
Err(e) => return e,
};
if let Err(e) = require_admin(&state, &admin_key) {
return e;
}
let _req = match v1::ListBannedRequest::decode(ctx.payload) {
Ok(r) => r,
Err(e) => return HandlerResult::err(RpcStatus::BadRequest, &format!("decode: {e}")),
};
let svc = mod_service(&state);
match svc.list_banned() {
Ok(resp) => {
let entries: Vec<v1::BannedUserEntry> = resp
.users
.into_iter()
.map(|u| v1::BannedUserEntry {
identity_key: u.identity_key,
reason: u.reason,
banned_at: u.banned_at,
expires_at: u.expires_at,
})
.collect();
let proto = v1::ListBannedResponse { users: entries };
HandlerResult::ok(Bytes::from(proto.encode_to_vec()))
}
Err(e) => {
warn!(error = %e, "list_banned failed");
HandlerResult::err(RpcStatus::Internal, "internal error")
}
}
}


@@ -1,5 +1,54 @@
# Status Log
## 2026-04-11 — Observability & MeshNode run() wiring
### Completed
- **observability.rs** — new module with health checks, Prometheus text export, HTTP server
- `NodeHealth` struct with per-subsystem health checks (transport, routing, store)
- `HealthStatus` enum (Healthy/Degraded/Draining/Unhealthy) with HTTP status codes
- `prometheus_text()` — renders `MetricsSnapshot` in Prometheus exposition format
- `HealthServer` — lightweight TCP-based HTTP server for `/healthz` and `/metricsz`
- **MeshNode.run()** — starts background tasks and returns a `RunHandle`
- Periodic GC task (store, routing table, rate limiters) with configurable interval
- Health/metrics HTTP server (optional, via `MeshNodeBuilder.health_listen()`)
- Shutdown coordination via `watch` channel
- **RunHandle** — public API for interacting with a running node
- `.node()` — access to the MeshNode
- `.health()` — current health snapshot
- `.metrics_snapshot()` — current metrics
- `.health_addr()` — bound health server address
- `.shutdown()` — graceful shutdown (signals tasks + drains transports)
- **Tracing spans** — `#[tracing::instrument]` on `process_incoming()` and `send()`
- Includes sender/dest address and payload length as span fields
- GC cycle wrapped in `mesh_gc` info span
- **Draining flag** — `AtomicBool` for shutdown awareness; health endpoint returns 503
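As a rough sketch, the status-to-HTTP mapping implied by the bullets above could look like this. The `HealthStatus` variant names come from the log; the `http_status` method and the codes for Healthy/Degraded/Unhealthy are assumptions (only "Draining returns 503" is stated):

```rust
// Hypothetical sketch of the HealthStatus-to-HTTP mapping described above.
// Only the Draining -> 503 behavior is stated in the log; the codes chosen
// for the other variants are assumptions.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum HealthStatus {
    Healthy,
    Degraded,
    Draining,
    Unhealthy,
}

impl HealthStatus {
    pub fn http_status(self) -> u16 {
        match self {
            // Fully serving traffic.
            HealthStatus::Healthy => 200,
            // Assumption: degraded nodes still answer 200 so load balancers
            // keep them in rotation while operators investigate.
            HealthStatus::Degraded => 200,
            // Stated in the log: /healthz returns 503 during shutdown drain.
            HealthStatus::Draining => 503,
            HealthStatus::Unhealthy => 503,
        }
    }
}

fn main() {
    assert_eq!(HealthStatus::Draining.http_status(), 503);
    assert_eq!(HealthStatus::Healthy.http_status(), 200);
    println!("ok");
}
```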
### Test Coverage
- 232 total tests passing (212 lib + 3 fapp_flow + 1 meshservice + 16 multi_node)
- 7 new observability unit tests (health healthy/degraded/draining, prometheus format)
- Full workspace `cargo check` clean
### What's Next
1. Wire `MeshNode.run()` into an example binary or the server
2. Announce loop task (periodic re-announce to neighbors)
3. Grafana dashboard for mesh metrics
4. Integration test for health HTTP endpoint
---
## 2026-04-01 — meshservice workspace integration
### Completed
- **Workspace** — `crates/meshservice/` is a workspace member (`Cargo.toml`); `cargo check -p meshservice` and full `cargo check --workspace` succeed.
- **P2P bridge test** — `crates/quicprochat-p2p/tests/meshservice_tcp_transport.rs`: same Ed25519 seed for `MeshIdentity` and `meshservice::ServiceIdentity`; FAPP announce encoded with `meshservice::wire`, sent over `TcpTransport`, decoded and handled by `ServiceRouter` + `FappService::relay()`.
- **Client command engine** — `SlashCommand::MeshTrace` / `MeshStats` wired through `Command` and `execute_slash` (fixes non-exhaustive match); playbook steps `mesh-trace` / `mesh-stats` added.
### Integration notes
- **Transport**: `meshservice` is transport-agnostic; carry `wire::encode` bytes inside `MeshEnvelope` / mesh ALPN (`quicprochat/mesh/1`) for production — not yet a direct dependency from `quicprochat-p2p` lib code.
- **FAPP duplication**: `quicprochat-p2p::fapp` (legacy mesh FAPP) and `meshservice::services::fapp` (generic service layer) coexist; long-term alignment TBD.
---
## 2026-04-01 — Production Infrastructure Sprint
### Completed
@@ -58,6 +107,78 @@
---
## 2026-04-01 — MeshNode: Production Integration
### Completed
- **MeshNode** — `mesh_node.rs`: Production-ready node integrating all subsystems
- `MeshNodeBuilder`: Fluent API for configuration
- `MeshConfig` integration for all settings
- `MeshMetrics` tracking for all operations
- Rate limiting on incoming messages via `RateLimiter`
- Backpressure control via `BackpressureController`
- Graceful shutdown via `ShutdownCoordinator`
- Optional `FappRouter` based on capabilities
- `MeshRouter` for envelope routing
- `TransportManager` for multi-transport support
### Key APIs
```rust
// Build a mesh node
let node = MeshNodeBuilder::new()
.config(config)
.identity(identity)
.fapp_relay()
.fapp_patient()
.build()
.await?;
// Process incoming with rate limiting + metrics
let action = node.process_incoming(&sender_addr, envelope)?;
// Garbage collection
node.gc()?;
// Graceful shutdown
node.shutdown().await;
```
### Test Coverage
- 222 total tests (203 lib + 3 fapp_flow + 16 multi_node)
- 5 new mesh_node tests
---
## 2026-04-01 — FAPP: Complete E2E Flow
### Completed (Latest)
- **E2E Encryption** — `fapp.rs`: SlotReserve/SlotConfirm with X25519 + ChaCha20-Poly1305
- `PatientEphemeralKey`: generates X25519 keypair for reservation
- `TherapistCrypto`: decrypts reserves, creates confirms with forward secrecy
- `PatientCrypto`: creates reserves, decrypts confirmations
- Each confirmation uses fresh ephemeral key for forward secrecy
- **FappRouter Reserve/Confirm** — `fapp_router.rs`:
- `DeliverReserve` / `DeliverConfirm` action variants
- `process_slot_reserve()`: routes to therapist or floods
- `process_slot_confirm()`: delivers to patient
- `send_reserve()` / `send_confirm()`: capability-checked sends
- `send_response()`: relay-to-patient response routing
- **Integration Tests** — `tests/fapp_flow.rs`:
- `full_fapp_flow_announce_query_reserve_confirm`: Complete flow from announce to confirmed appointment
- `fapp_rejection_flow`: Tests therapist declining a reservation
- `fapp_query_filters`: Tests Fachrichtung, PLZ, and other filters
### Test Coverage
- 217 total tests (198 lib + 3 fapp_flow + 16 multi_node)
- 31 FAPP-specific tests (24 fapp + 7 fapp_router)
### What's Next
1. Wire FappRouter into P2pNode startup
2. LoRa testing for FAPP messages
---
## 2026-03-31 — FAPP: Free Appointment Propagation Protocol
### Completed
@@ -67,7 +188,6 @@
- **Domain model**: Fachrichtung, Modalitaet, Kostentraeger, SlotType (German enum names for domain concepts)
- **FappStore**: in-memory cache with dedup (therapist_address + sequence), TTL expiry, signature verification, capacity limits
- **Query matching**: filter by Fachrichtung, Modalitaet, Kostentraeger, PLZ prefix, time range, SlotType, max_results
- **Tests**: 16 inline tests covering creation, signing, verification, tampering, forwarding, expiry, CBOR roundtrip, store dedup, sequence supersede, query filters (PLZ, SlotType, Kostentraeger, max_results)
- **Privacy model**: therapist identity public (Approbation-bound), patient queries anonymous
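The query-matching rules can be illustrated with a small sketch. `Slot`, `Query`, `matches`, and `run_query` are hypothetical names (not the actual FappStore API), and only the Fachrichtung, PLZ-prefix, and max_results filters are modeled:

```rust
// Hypothetical types sketching the FAPP query-matching rules above.
// Field and function names are illustrative, not the real FappStore API.
#[derive(Debug, Clone)]
struct Slot {
    fachrichtung: String, // e.g. "Verhaltenstherapie"
    plz: String,          // location hint is PLZ only, e.g. "80331"
}

#[derive(Default)]
struct Query {
    fachrichtung: Option<String>, // None = match any
    plz_prefix: Option<String>,   // prefix match, e.g. "803"
    max_results: usize,
}

fn matches(slot: &Slot, q: &Query) -> bool {
    if let Some(f) = &q.fachrichtung {
        if f != &slot.fachrichtung {
            return false;
        }
    }
    if let Some(p) = &q.plz_prefix {
        if !slot.plz.starts_with(p.as_str()) {
            return false;
        }
    }
    true
}

fn run_query<'a>(slots: &'a [Slot], q: &Query) -> Vec<&'a Slot> {
    // Apply all filters, then truncate to max_results.
    slots.iter().filter(|s| matches(s, q)).take(q.max_results).collect()
}

fn main() {
    let slots = vec![
        Slot { fachrichtung: "Verhaltenstherapie".into(), plz: "80331".into() },
        Slot { fachrichtung: "Tiefenpsychologie".into(), plz: "80333".into() },
    ];
    let by_plz = Query { plz_prefix: Some("803".into()), max_results: 10, ..Default::default() };
    assert_eq!(run_query(&slots, &by_plz).len(), 2);
    let by_fach = Query { fachrichtung: Some("Verhaltenstherapie".into()), max_results: 10, ..Default::default() };
    assert_eq!(run_query(&slots, &by_fach).len(), 1);
}
```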
### Design Decisions ### Design Decisions
@@ -77,32 +197,6 @@
- Location hint is PLZ only (e.g. "80331") — never exact address
- Anti-spam: Approbation hash binding, signature verification, sequence-based dedup, rate limiting, TTL enforcement
### FAPP integration — status
**2026-04-01: FappRouter implemented!**
New `fapp_router.rs` module:
- `FappAction` enum: Ignore, Dropped, Forward, QueryResponse
- Wire format: 1-byte tag (0x01-0x05) + CBOR body
- `FappRouter` struct with shared `RoutingTable` + `TransportManager`
- `handle_incoming()` decodes and dispatches FAPP frames
- `process_slot_announce()` with relay/flood logic (dedup, hop check, store, forward)
- `process_slot_query()` answers from local `FappStore`
- `broadcast_announce()` / `send_query()` for outbound floods
- `drain_pending_sends()` for async send integration
- 3 unit tests passing
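The wire format above (1-byte tag 0x01-0x05 followed by a CBOR body) can be sketched as follows; the tag-range constants and function names are illustrative, and the CBOR payload is treated as opaque bytes:

```rust
// Minimal sketch of the FAPP frame layout described above: a 1-byte tag
// (0x01-0x05) followed by a CBOR body. The CBOR payload is treated as
// opaque bytes here; the constants and function names are illustrative.
const TAG_MIN: u8 = 0x01;
const TAG_MAX: u8 = 0x05;

fn encode_frame(tag: u8, cbor_body: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(1 + cbor_body.len());
    out.push(tag);
    out.extend_from_slice(cbor_body);
    out
}

/// Returns (tag, body), or None for an empty frame or an unknown tag.
fn decode_frame(frame: &[u8]) -> Option<(u8, &[u8])> {
    let (&tag, body) = frame.split_first()?;
    if !(TAG_MIN..=TAG_MAX).contains(&tag) {
        return None;
    }
    Some((tag, body))
}

fn main() {
    let frame = encode_frame(0x01, b"cbor-bytes");
    assert_eq!(decode_frame(&frame), Some((0x01, b"cbor-bytes".as_ref())));
    assert_eq!(decode_frame(&[0x09, 1, 2]), None); // unknown tag rejected
    println!("ok");
}
```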
**Remaining steps**
1. **Integration test:** Multi-node demo (therapist → relay → patient flow)
2. **Wire to P2pNode:** Add `FappRouter` to `start_with_mesh()` or similar
3. **SlotReserve/SlotConfirm:** E2E encrypted reservation flow
4. **LoRa test:** Verify FAPP over constrained links
**Definition of done**
- announce → query → response works over multi-hop (automated or manual)
- SlotReserve/Confirm E2E encryption works
- LoRa test or documented blocker
---
## 2026-03-30 — Mesh Protocol Infrastructure Sprint

paper/Makefile Normal file

@@ -0,0 +1,22 @@
MAIN = fapp
BIB = fapp-refs
.PHONY: all clean watch
all: $(MAIN).pdf
$(MAIN).pdf: $(MAIN).tex $(BIB).bib
	pdflatex -interaction=nonstopmode $(MAIN)
	bibtex $(MAIN)
	pdflatex -interaction=nonstopmode $(MAIN)
	pdflatex -interaction=nonstopmode $(MAIN)

clean:
	rm -f $(MAIN).{aux,bbl,blg,log,out,pdf,toc,lof,lot,fls,fdb_latexmk,synctex.gz}

watch:
	@echo "Watching for changes..."
	@while true; do \
		inotifywait -qe modify $(MAIN).tex $(BIB).bib 2>/dev/null || sleep 2; \
		$(MAKE) all; \
	done

paper/fapp-refs.bib Normal file

@@ -0,0 +1,263 @@
@misc{rfc9000,
author = {Jana Iyengar and Martin Thomson},
title = {{QUIC}: A {UDP}-Based Multiplexed and Secure Transport},
howpublished = {RFC 9000},
year = {2021},
month = may,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC9000},
}
@misc{rfc9420,
author = {Richard Barnes and Benjamin Beurdouche and Raphael Robert and Jon Millican and Emad Omara and Katriel Cohn-Gordon},
title = {The Messaging Layer Security ({MLS}) Protocol},
howpublished = {RFC 9420},
year = {2023},
month = jul,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC9420},
}
@misc{rfc8032,
author = {Simon Josefsson and Ilari Liusvaara},
title = {Edwards-Curve Digital Signature Algorithm ({EdDSA})},
howpublished = {RFC 8032},
year = {2017},
month = jan,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC8032},
}
@misc{rfc7748,
author = {Adam Langley and Mike Hamburg and Sean Turner},
title = {Elliptic Curves for Security},
howpublished = {RFC 7748},
year = {2016},
month = jan,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC7748},
}
@misc{rfc8439,
author = {Yoav Nir and Adam Langley},
title = {{ChaCha20} and {Poly1305} for {IETF} Protocols},
howpublished = {RFC 8439},
year = {2018},
month = jun,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC8439},
}
@misc{rfc5869,
author = {Hugo Krawczyk and Pasi Eronen},
title = {{HMAC}-Based Extract-and-Expand Key Derivation Function ({HKDF})},
howpublished = {RFC 5869},
year = {2010},
month = may,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC5869},
}
@misc{rfc8949,
author = {Carsten Bormann and Paul Hoffman},
title = {Concise Binary Object Representation ({CBOR})},
howpublished = {RFC 8949},
year = {2020},
month = dec,
publisher = {Internet Engineering Task Force},
doi = {10.17487/RFC8949},
}
@article{bpt2022wartezeiten,
author = {{Bundespsychotherapeutenkammer}},
title = {{BPtK}-Studie: Wartezeiten in der ambulanten psychotherapeutischen Versorgung},
journal = {BPtK Forschung},
year = {2022},
note = {Available at \url{https://www.bptk.de}},
}
@article{bpt2024versorgung,
author = {{Bundespsychotherapeutenkammer}},
title = {Ein Jahr nach der Reform der Psychotherapie-Richtlinie},
journal = {BPtK Forschung},
year = {2024},
note = {Available at \url{https://www.bptk.de}},
}
@article{jacobi2014psychische,
author = {Frank Jacobi and Michael H{\"o}fler and Jens Strehle and Simon Mack and Axel Gerschler and Lucie Scholl and Manfred E. Beutel and Wolfgang Maier and Borwin Bandelow and Harald Jurgen Freyberger and Hans-Ulrich Wittchen},
title = {Mental disorders in the general population: Study on the health of adults in {Germany} and the additional module mental health ({DEGS1-MH})},
journal = {Der Nervenarzt},
volume = {85},
number = {1},
pages = {77--87},
year = {2014},
doi = {10.1007/s00115-013-3961-y},
}
@article{schlack2023mental,
author = {Robert Schlack and Heike Hölling and Liane Sann and Christian Schmidt and Elvira Mauz and Thomas Lampert},
title = {Mental health of children and adolescents during the {COVID-19} pandemic},
journal = {Journal of Health Monitoring},
volume = {8},
number = {S1},
year = {2023},
doi = {10.25646/11043},
}
@inproceedings{goldschlag1996onion,
author = {David M. Goldschlag and Michael G. Reed and Paul F. Syverson},
title = {Hiding Routing Information},
booktitle = {Information Hiding: First International Workshop},
pages = {137--150},
year = {1996},
publisher = {Springer},
doi = {10.1007/3-540-61996-8_37},
}
@article{lora2015semtech,
author = {{Semtech Corporation}},
title = {{LoRa} Modulation Basics},
journal = {Semtech Application Note AN1200.22},
year = {2015},
}
@misc{loraalliance2020,
author = {{LoRa Alliance}},
title = {{LoRaWAN} Specification v1.0.4},
year = {2020},
note = {Available at \url{https://lora-alliance.org/resource-hub/}},
}
@misc{eu868dutycycle,
author = {{European Telecommunications Standards Institute}},
title = {{ETSI} {EN} 300 220: Short Range Devices ({SRD})},
year = {2019},
note = {Electromagnetic compatibility and Radio spectrum Matters},
}
@inproceedings{borisov2004offrecord,
author = {Nikita Borisov and Ian Goldberg and Eric Brewer},
title = {Off-the-Record Communication, or, Why Not to Use {PGP}},
booktitle = {Proceedings of the 2004 ACM Workshop on Privacy in the Electronic Society},
pages = {77--84},
year = {2004},
doi = {10.1145/1029179.1029200},
}
@inproceedings{douceur2002sybil,
author = {John R. Douceur},
title = {The Sybil Attack},
booktitle = {Peer-to-Peer Systems: First International Workshop (IPTPS 2002)},
pages = {251--260},
year = {2002},
publisher = {Springer},
doi = {10.1007/3-540-45748-8_24},
}
@inproceedings{meshtastic2023,
author = {{Meshtastic Project}},
title = {Meshtastic: Open Source Long Range Mesh Communicator},
year = {2023},
note = {Available at \url{https://meshtastic.org}},
}
@misc{reticulum2023,
author = {Mark Qvist},
title = {Reticulum: Cryptography-based networking for wide-area and local networks},
year = {2023},
note = {Available at \url{https://reticulum.network}},
}
@misc{briar2017,
author = {{Briar Project}},
title = {Briar: Secure Messaging, Anywhere},
year = {2017},
note = {Available at \url{https://briarproject.org}},
}
@inproceedings{danezis2003mixminion,
author = {George Danezis and Roger Dingledine and Nick Mathewson},
title = {Mixminion: Design of a Type {III} Anonymous Remailer Protocol},
booktitle = {IEEE Symposium on Security and Privacy},
pages = {2--15},
year = {2003},
doi = {10.1109/SECPRI.2003.1199323},
}
@article{bernstein2012chacha,
author = {Daniel J. Bernstein},
title = {The {ChaCha} family of stream ciphers},
year = {2008},
note = {Available at \url{https://cr.yp.to/chacha.html}},
}
@misc{sgbv2024,
title = {{Sozialgesetzbuch ({SGB}) F{\"u}nftes Buch -- Gesetzliche Krankenversicherung}},
note = {Sections 92, 95, 101. Available at \url{https://www.gesetze-im-internet.de/sgb_5/}},
year = {2024},
}
@misc{kbvarztsuche,
author = {{Kassenärztliche Bundesvereinigung}},
title = {{KBV} Arztsuche},
year = {2024},
note = {Available at \url{https://www.kbv.de/html/arztsuche.php}},
}
@misc{doctolib2024,
author = {{Doctolib GmbH}},
title = {Doctolib: Online-Terminbuchung},
year = {2024},
note = {Available at \url{https://www.doctolib.de}},
}
@misc{terminservice116117,
author = {{Kassenärztliche Bundesvereinigung}},
title = {Terminservicestellen der {KV} -- 116117},
year = {2024},
note = {Available at \url{https://www.116117.de}},
}
@article{mandl2007indivo,
author = {Kenneth D. Mandl and Isaac S. Kohane},
title = {Tectonic Shifts in the Health Information Economy},
journal = {New England Journal of Medicine},
volume = {358},
number = {16},
pages = {1732--1737},
year = {2008},
doi = {10.1056/NEJMsb0800220},
}
@misc{benet2014ipfs,
author = {Juan Benet},
title = {{IPFS} -- Content Addressed, Versioned, {P2P} File System},
year = {2014},
note = {arXiv preprint arXiv:1407.3561},
}
@inproceedings{hkdf2010krawczyk,
author = {Hugo Krawczyk},
title = {Cryptographic Extraction and Key Derivation: The {HKDF} Scheme},
booktitle = {Advances in Cryptology -- CRYPTO 2010},
pages = {631--648},
year = {2010},
publisher = {Springer},
doi = {10.1007/978-3-642-14623-7_34},
}
@misc{dgppn2019leitlinie,
author = {{DGPPN}},
title = {S3-Leitlinie Psychosoziale Therapien bei schweren psychischen Erkrankungen},
year = {2019},
note = {AWMF register. Available at \url{https://www.awmf.org}},
}
@misc{who2022mental,
author = {{World Health Organization}},
title = {World Mental Health Report: Transforming Mental Health for All},
year = {2022},
note = {Available at \url{https://www.who.int}},
}
% paper/fapp.tex
\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[margin=2.5cm]{geometry}
\usepackage{amsmath,amssymb}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage{hyperref}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{enumitem}
\usepackage{float}
\usepackage{url}
\hypersetup{
colorlinks=true,
linkcolor=blue!60!black,
citecolor=green!50!black,
urlcolor=blue!70!black,
}
\lstset{
basicstyle=\ttfamily\small,
breaklines=true,
frame=single,
framerule=0.4pt,
rulecolor=\color{gray!50},
backgroundcolor=\color{gray!5},
numbers=left,
numberstyle=\tiny\color{gray},
numbersep=6pt,
columns=fullflexible,
keepspaces=true,
xleftmargin=1.5em,
xrightmargin=0.5em,
}
\newcommand{\fapp}{\textsc{Fapp}}
\newcommand{\qpq}{\textsc{QuicProQuo}}
\newcommand{\cbor}{\textsc{Cbor}}
\title{\textbf{FAPP: A Privacy-Preserving Decentralized Protocol\\for Psychotherapy Appointment Discovery}}
\author{
Christian Nennemann\\
Independent Researcher\\
\texttt{write@nennemann.de}
}
\date{April 2026}
\begin{document}
\maketitle
\begin{abstract}
In Germany, patients seeking psychotherapy face wait times of three to six months,
driven in part by structural opacity in the appointment allocation system of the
\emph{Kassenärztliche Vereinigung} (KV). We present FAPP (Free Appointment
Propagation Protocol), a decentralized protocol that enables licensed
psychotherapists to announce free appointment slots into a mesh network, where
patients can discover and reserve them anonymously. FAPP implements an
\emph{asymmetric privacy model}: therapist identities are public and
cryptographically bound to their professional license (Approbation), while
patient queries carry no identifying information. Reservations are end-to-end
encrypted using X25519 Diffie--Hellman key agreement with ChaCha20-Poly1305
authenticated encryption, ensuring that only the intended therapist can read
patient contact information. The protocol is transport-agnostic, supporting
QUIC, TCP, and LoRa links through the \qpq{} mesh networking stack. We
describe the protocol design, analyze its security properties against a
realistic adversary model grounded in German healthcare regulation, and
discuss deployment considerations for a real-world pilot.
\end{abstract}
\medskip
\noindent\textbf{Keywords:} decentralized healthcare, privacy-preserving discovery, mesh networking, psychotherapy access, appointment scheduling
% ===========================================================================
\section{Introduction}
\label{sec:intro}
% ===========================================================================
Mental disorders affect approximately 27.8\% of the adult population in Germany
in any given year~\cite{jacobi2014psychische}, yet the infrastructure for
connecting patients with psychotherapists remains rooted in centralized,
opaque systems. The Kassenärztliche Vereinigung (KV)---the self-governing
body of statutory health insurance physicians---operates a \emph{Terminservicestelle}
(appointment referral service) reachable via the national number
116117~\cite{terminservice116117}. Studies by the Bundespsychotherapeutenkammer
(BPtK) consistently report average wait times of three to six months between
initial contact and the first therapeutic session~\cite{bpt2022wartezeiten},
with the situation worsening for child and adolescent psychotherapy and in
rural regions.

The structural problem is one of \emph{visibility}. A therapist with a free
50-minute slot next Tuesday has no efficient channel to make this slot
discoverable to the patients who need it. The KV's 116117 system operates
on a referral basis with limited real-time slot data. Commercial platforms
such as Doctolib~\cite{doctolib2024} aggregate some appointment data but
require therapists to opt in, charge fees, and track patient search
behavior~\cite{bpt2024versorgung}. The KBV's own physician search
portal~\cite{kbvarztsuche} provides practice information but not real-time
slot availability. None of these systems allow patients to search
\emph{anonymously}---a property of particular importance in mental health,
where the mere act of searching for a therapist can carry stigma.

We propose FAPP (Free Appointment Propagation Protocol), a decentralized
protocol designed to address this specific gap. FAPP operates over the
\qpq{} mesh network~\cite{rfc9000}, enabling therapists to announce free
appointment slots and patients to discover them without any central server,
registration, or identity disclosure. The protocol enforces an
\emph{asymmetric privacy model}: therapists, as licensed professionals
(\emph{Approbation}, regulated under SGB~V \S\S~92, 95~\cite{sgbv2024}),
operate with public, verifiable identities, while patients enjoy query-level
anonymity. Reservation messages are end-to-end encrypted so that
intermediary mesh nodes cannot observe patient contact information.

The contributions of this paper are:
\begin{enumerate}[nosep]
\item A complete protocol specification for decentralized, privacy-preserving
appointment discovery tailored to the German psychotherapy system
(Section~\ref{sec:protocol}).
\item An asymmetric privacy model with formal threat analysis grounded in
German healthcare regulation (Sections~\ref{sec:threat} and~\ref{sec:security}).
\item A transport-agnostic design that operates over QUIC, TCP, and LoRa
mesh links (Section~\ref{sec:transport}).
\item An open-source reference implementation in Rust with 222~passing
tests, including 31~FAPP-specific integration tests.
\end{enumerate}
% ===========================================================================
\section{Related Work}
\label{sec:related}
% ===========================================================================
\paragraph{Centralized appointment platforms.}
Doctolib~\cite{doctolib2024} is the dominant commercial appointment platform
in Germany and France, offering real-time booking for physicians including
some psychotherapists. The KV's 116117 Terminservicestelle~\cite{terminservice116117}
provides telephone and online appointment referral mandated by
SGB~V~\S~75. Both systems are centralized: they require therapists to
register, maintain a server-side database of slots, and---critically---record
patient search queries, creating a correlation between identity and mental
health need. FAPP differs fundamentally by eliminating the central database and
enabling anonymous discovery.
\paragraph{Decentralized healthcare data systems.}
Research on patient-controlled health records~\cite{mandl2007indivo} has
explored decentralized architectures where patients hold their own data.
Content-addressed storage systems like IPFS~\cite{benet2014ipfs} have been
proposed for medical record sharing. However, these focus on record
\emph{storage} rather than real-time \emph{service discovery}, and none
address the specific problem of appointment slot propagation in a
privacy-preserving manner.
\paragraph{Mesh networking for constrained environments.}
Meshtastic~\cite{meshtastic2023} provides LoRa-based mesh networking for
text messaging with basic encryption. Reticulum~\cite{reticulum2023}
offers a cryptographic networking stack supporting multiple transport
layers including LoRa, with a focus on resilience. Briar~\cite{briar2017}
implements delay-tolerant, peer-to-peer messaging with Tor integration for
censorship resistance. FAPP draws architectural inspiration from these
systems---particularly Reticulum's transport abstraction and Briar's
store-and-forward model---but adds domain-specific semantics for appointment
discovery, structured query matching, and a therapist verification framework
absent from general-purpose mesh protocols.
\paragraph{Privacy-preserving discovery.}
Anonymous communication systems, from onion routing~\cite{goldschlag1996onion}
to Mixminion~\cite{danezis2003mixminion}, provide sender anonymity at the
network layer. Off-the-Record messaging~\cite{borisov2004offrecord} achieves
deniability and forward secrecy in point-to-point communication.
MLS~\cite{rfc9420} extends these properties to group settings. FAPP's
privacy model is narrower but operationally distinct: rather than hiding
\emph{all} participants, it deliberately exposes therapist identity (as
required by professional regulation) while protecting patient anonymity.
This asymmetric model, while simpler than full anonymity systems, aligns
precisely with the regulatory and social requirements of psychotherapy
access.
% ===========================================================================
\section{Threat Model and Privacy Requirements}
\label{sec:threat}
% ===========================================================================
\subsection{Asymmetric Privacy Model}
FAPP's privacy model reflects the inherent asymmetry of the
therapist--patient relationship in German healthcare law:
\begin{description}[nosep,leftmargin=1.5em]
\item[Therapist identity is public.] Psychotherapists in Germany hold an
\emph{Approbation} (professional license) issued by the state health
authority. Their practice is listed in KV registries. FAPP
binds each therapist's mesh identity to their Approbation via a
SHA-256 hash of the credential number, creating accountability
without exposing the raw number to the mesh.
\item[Patient queries are anonymous.] A \texttt{SlotQuery} message
contains only search filters (specialization, insurance type, postal
code prefix, time range) and a random correlation ID. No patient
identity, device fingerprint, or return address is attached.
Only when a patient \emph{chooses} to reserve a slot does an encrypted
channel to the therapist emerge.
\end{description}
\subsection{Adversary Model}
We consider the following adversary capabilities:
\begin{enumerate}[nosep]
\item \textbf{Passive network observer.} An adversary who can observe all
mesh traffic on links they control. They can see message sizes, timing,
and CBOR-encoded (but not encrypted) FAPP frames for \texttt{SlotAnnounce}
and \texttt{SlotQuery} messages. They cannot observe the content of
\texttt{SlotReserve} or \texttt{SlotConfirm} payloads, which are
end-to-end encrypted.
\item \textbf{Malicious relay node.} A relay node with \texttt{CAP\_FAPP\_RELAY}
that faithfully participates in message propagation but attempts to
correlate queries with reservations or de-anonymize patients.
\item \textbf{Fake therapist.} An adversary who generates an Ed25519 keypair
and publishes \texttt{SlotAnnounce} messages with fabricated Approbation
hashes, attempting to collect patient contact data.
\item \textbf{Denial-of-service attacker.} An adversary who floods the mesh
with spurious \texttt{SlotAnnounce} or \texttt{SlotQuery} messages to
exhaust relay storage or bandwidth.
\end{enumerate}
We explicitly exclude the following from our threat model:
\begin{itemize}[nosep]
\item Global passive adversaries who observe all mesh links simultaneously.
\item Adversaries who compromise a therapist's long-term Ed25519 private key.
\item Physical-layer attacks on LoRa radio (jamming, direction finding).
\end{itemize}
\subsection{Legal Context}
The protocol operates within the German healthcare regulatory framework:
\begin{itemize}[nosep]
\item \textbf{Approbation} (PsychThG \S~1): Psychotherapists require a
state-issued license. FAPP's therapist verification levels are designed
to interoperate with this credential system.
\item \textbf{Bedarfsplanung} (SGB~V \S~101): Regional capacity planning
determines the number of licensed therapy seats per area. FAPP does
not circumvent this system; it improves the visibility of slots within
it.
\item \textbf{Patient data protection} (GDPR, BDSG): Patient search behavior
constitutes health-related personal data under GDPR Art.~9.
FAPP's anonymous query design avoids generating this data category
entirely---a property no centralized platform can offer.
\item \textbf{Fernbehandlung} (MBO-{\"A} \S~7): Telemedicine regulations
require an initial in-person contact for some therapy modalities.
FAPP's \texttt{Modalitaet} field distinguishes in-person, video, and
hybrid sessions, supporting compliance-aware search.
\end{itemize}
% ===========================================================================
\section{Protocol Design}
\label{sec:protocol}
% ===========================================================================
\subsection{Overview}
FAPP defines five message types that together implement a complete
appointment discovery and reservation lifecycle:
\begin{enumerate}[nosep]
\item \textbf{SlotAnnounce}: Therapist publishes available time slots.
\item \textbf{SlotQuery}: Patient searches for matching slots (anonymous).
\item \textbf{SlotResponse}: Relay or therapist returns matching results.
\item \textbf{SlotReserve}: Patient claims a slot (E2E encrypted to therapist).
\item \textbf{SlotConfirm}: Therapist confirms or rejects the reservation.
\end{enumerate}
\noindent The first three messages are \emph{cleartext} within the mesh (though
protected by transport-layer encryption on each hop). The last two carry
end-to-end encrypted payloads that intermediary nodes cannot read.
\subsection{Capability Flags}
FAPP extends the mesh announce protocol's capability bitfield with three
flags that allow nodes to declare their role:
\begin{center}
\begin{tabular}{llp{7cm}}
\toprule
\textbf{Flag} & \textbf{Value} & \textbf{Semantics} \\
\midrule
\texttt{CAP\_FAPP\_THERAPIST} & \texttt{0x0100} & Node is a licensed
therapist that publishes \texttt{SlotAnnounce} messages. \\
\texttt{CAP\_FAPP\_RELAY} & \texttt{0x0200} & Node caches
\texttt{SlotAnnounce}s and answers \texttt{SlotQuery} messages from
its local store. \\
\texttt{CAP\_FAPP\_PATIENT} & \texttt{0x0400} & Node can issue anonymous
\texttt{SlotQuery} and \texttt{SlotReserve} messages. \\
\bottomrule
\end{tabular}
\end{center}
\noindent A single node may combine flags---for example, a relay operated by
a patient advocacy group would set both \texttt{CAP\_FAPP\_RELAY} and
\texttt{CAP\_FAPP\_PATIENT}.
\subsection{Message Specifications}
\subsubsection{SlotAnnounce}
A \texttt{SlotAnnounce} carries the therapist's available time slots
along with metadata needed for discovery and verification. Its fields
are:
\begin{itemize}[nosep]
\item \texttt{id}: 16-byte unique identifier, derived as
$\texttt{SHA-256}(\texttt{therapist\_address} \| \texttt{sequence})[0..16]$.
\item \texttt{therapist\_address}: 16-byte truncated mesh address,
computed as $\texttt{SHA-256}(\texttt{Ed25519\_pubkey})[0..16]$.
\item \texttt{fachrichtung}: List of therapy specializations
(\emph{Verhaltenstherapie}, \emph{Tiefenpsychologisch fundiert},
\emph{Analytisch}, \emph{Systemisch}, \emph{Kinder-/Jugend}).
\item \texttt{modalitaet}: Session modalities
(\emph{Praxis}, \emph{Video}, \emph{Hybrid}).
\item \texttt{kostentraeger}: Accepted insurance types
(\emph{GKV}, \emph{PKV}, \emph{Selbstzahler}).
\item \texttt{location\_hint}: Postal code (PLZ) only; never an exact address.
\item \texttt{slots}: Vector of \texttt{TimeSlot} records, each containing
\texttt{start\_unix} (Unix seconds), \texttt{duration\_minutes} (typically
50 or 25), and \texttt{slot\_type} (\emph{Erstgespräch},
\emph{Probatorik}, \emph{Therapie}, \emph{Akut}).
\item \texttt{approbation\_hash}: SHA-256 of the therapist's Approbation
number, binding the mesh identity to a real-world credential.
\item \texttt{profile\_url}: Optional URL to the therapist's public profile
(practice website, Jameda, KBV listing) for out-of-band verification.
\item \texttt{sequence}: Monotonically increasing counter per therapist,
used for deduplication and supersession of older announcements.
\item \texttt{ttl\_hours}: Time-to-live (default: 168 hours = 7 days).
\item \texttt{timestamp}: Unix seconds at creation.
\item \texttt{signature}: Ed25519 signature over all fields except
\texttt{signature} and \texttt{hop\_count}.
\item \texttt{hop\_count}, \texttt{max\_hops}: Current and maximum
propagation depth (default max: 8 hops).
\end{itemize}
The signature covers a deterministic byte serialization of all non-excluded
fields, using fixed-width enum indices and \texttt{0xFF} separators between
variable-length sections. Forwarding nodes increment \texttt{hop\_count}
without re-signing---a design shared with the underlying mesh announce
protocol.
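The two SHA-256 truncations above are compact enough to sketch directly. The following Python fragment mirrors the derivations (the reference implementation is in Rust; the helper names and the 8-byte big-endian encoding of \texttt{sequence} are illustrative assumptions, since the paper does not pin down the integer serialization):

```python
import hashlib

def mesh_address(ed25519_pubkey: bytes) -> bytes:
    """therapist_address = SHA-256(Ed25519_pubkey)[0..16]."""
    return hashlib.sha256(ed25519_pubkey).digest()[:16]

def announce_id(therapist_address: bytes, sequence: int) -> bytes:
    """id = SHA-256(therapist_address || sequence)[0..16].
    The 8-byte big-endian sequence encoding is an assumption."""
    seq = sequence.to_bytes(8, "big")
    return hashlib.sha256(therapist_address + seq).digest()[:16]

addr = mesh_address(b"\x01" * 32)   # placeholder 32-byte public key
aid = announce_id(addr, sequence=7)
assert len(addr) == len(aid) == 16
```

Because \texttt{sequence} enters the hash, each new announcement version receives a fresh \texttt{id}, which is what makes ID-based deduplication and sequence-based supersession compatible.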
\subsubsection{SlotQuery}
A \texttt{SlotQuery} enables patients to search for available slots without
revealing their identity:
\begin{itemize}[nosep]
\item \texttt{query\_id}: 16 random bytes for response correlation.
\item \texttt{fachrichtung}, \texttt{modalitaet}, \texttt{kostentraeger}:
Optional filters narrowing search results.
\item \texttt{plz\_prefix}: Optional postal code prefix (e.g.,
\texttt{"80"} for the Munich area), enabling geographic filtering
without revealing the patient's exact location.
\item \texttt{earliest}, \texttt{latest}: Optional Unix-second bounds
on acceptable appointment times.
\item \texttt{slot\_type}: Optional filter by appointment type.
\item \texttt{max\_results}: Maximum number of results requested.
\end{itemize}
\noindent No patient address, key, or identity material appears in the query.
The \texttt{query\_id} is random and single-use, providing no linkability
across queries.
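As a sketch of how little a query carries, the following Python fragment assembles a \texttt{SlotQuery} as a plain dictionary (the dictionary representation and helper name are illustrative; the actual wire form is CBOR):

```python
import os

def build_slot_query(fachrichtung=None, modalitaet=None, kostentraeger=None,
                     plz_prefix=None, earliest=None, latest=None,
                     slot_type=None, max_results=10) -> dict:
    """Assemble a SlotQuery: a fresh random query_id plus optional filters.
    Nothing here identifies the patient -- no key, address, or fingerprint."""
    query = {"query_id": os.urandom(16), "max_results": max_results}
    # Unset filters are simply omitted from the message.
    for key, value in [("fachrichtung", fachrichtung), ("modalitaet", modalitaet),
                       ("kostentraeger", kostentraeger), ("plz_prefix", plz_prefix),
                       ("earliest", earliest), ("latest", latest),
                       ("slot_type", slot_type)]:
        if value is not None:
            query[key] = value
    return query

q = build_slot_query(fachrichtung=["Verhaltenstherapie"], plz_prefix="80")
```

The only per-query state is the random \texttt{query\_id}, which the patient retains locally to match incoming \texttt{SlotResponse} messages.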
\subsubsection{SlotResponse}
A \texttt{SlotResponse} contains the \texttt{query\_id} from the
originating query and a vector of matching \texttt{SlotAnnounce} records.
Full announce records are included so the patient can independently verify
each therapist's Ed25519 signature and Approbation hash binding.
\subsubsection{SlotReserve}
\label{sec:reserve}
When a patient selects a slot, they construct a \texttt{SlotReserve}
message containing:
\begin{itemize}[nosep]
\item \texttt{slot\_announce\_id}: Reference to the target
\texttt{SlotAnnounce}.
\item \texttt{slot\_index}: Index into the announce's slot vector.
\item \texttt{patient\_ephemeral\_key}: A fresh X25519 public key
generated for this reservation.
\item \texttt{encrypted\_contact}: Patient contact information, encrypted
to the therapist's X25519 public key (derived from their Ed25519
identity via the standard birational map between the Edwards and
Montgomery curve forms~\cite{rfc7748}).
\end{itemize}
\noindent The encryption scheme is detailed in Section~\ref{sec:crypto}.
\subsubsection{SlotConfirm}
The therapist's response contains:
\begin{itemize}[nosep]
\item \texttt{slot\_announce\_id}, \texttt{slot\_index}: Identifies the
reserved slot.
\item \texttt{confirmed}: Boolean acceptance or rejection.
\item \texttt{encrypted\_details}: Appointment details (room, address,
instructions), encrypted to the patient's ephemeral key.
\item \texttt{therapist\_ephemeral\_key}: A fresh X25519 key generated for
this confirmation, providing forward secrecy.
\end{itemize}
\subsection{Cryptographic Construction}
\label{sec:crypto}
The E2E encryption for \texttt{SlotReserve} and \texttt{SlotConfirm}
follows a standard ECDH + KDF + AEAD pattern:
\paragraph{Key agreement.}
The patient generates an ephemeral X25519 keypair
$(sk_P, pk_P)$~\cite{rfc7748}. The therapist's X25519 public key $pk_T$
is derived from their Ed25519 identity key via the standard birational map.
The shared secret is computed as:
\[
ss = \text{X25519}(sk_P, pk_T)
\]
\paragraph{Key derivation.}
A 32-byte symmetric key is derived using HKDF-SHA256~\cite{rfc5869,hkdf2010krawczyk}:
\[
k = \text{HKDF-Expand}(ss, \texttt{"fapp-reserve-v1"}, 32)
\]
For confirmations, the context string is \texttt{"fapp-confirm-v1"} and
the therapist generates a fresh ephemeral keypair, ensuring forward
secrecy even if the therapist's long-term key is later compromised.
\paragraph{Authenticated encryption.}
Plaintext is encrypted with ChaCha20-Poly1305~\cite{rfc8439,bernstein2012chacha}
using a random 12-byte nonce. The ciphertext format is:
\[
\texttt{nonce}_{12} \| \texttt{ciphertext} \| \texttt{tag}_{16}
\]
This construction provides IND-CCA2 security under standard assumptions.
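The X25519 and ChaCha20-Poly1305 steps require a cryptographic library, but the key-derivation step can be sketched with Python's standard \texttt{hmac} module. Following the formula above, the raw shared secret is used directly as the HKDF pseudorandom key (the stand-in secret below is of course not a real DH output):

```python
import hmac, hashlib

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand per RFC 5869: T(i) = HMAC(prk, T(i-1) || info || i)."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# k = HKDF-Expand(ss, "fapp-reserve-v1", 32); ss would come from X25519.
ss = b"\x42" * 32                      # stand-in for the X25519 shared secret
k_reserve = hkdf_expand(ss, b"fapp-reserve-v1", 32)
k_confirm = hkdf_expand(ss, b"fapp-confirm-v1", 32)
assert k_reserve != k_confirm          # context strings separate the two keys
```

The distinct context strings give the reservation and confirmation directions independent keys even when derived from related shared secrets.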
\subsection{Wire Format}
All FAPP messages are serialized with CBOR (Concise Binary Object
Representation, RFC~8949~\cite{rfc8949}), consistent with the \qpq{}
mesh envelope and announce formats. On the wire, each FAPP frame is
prefixed with a single-byte tag identifying the message type:
\begin{center}
\begin{tabular}{cl}
\toprule
\textbf{Tag} & \textbf{Message Type} \\
\midrule
\texttt{0x01} & \texttt{SlotAnnounce} \\
\texttt{0x02} & \texttt{SlotQuery} \\
\texttt{0x03} & \texttt{SlotResponse} \\
\texttt{0x04} & \texttt{SlotReserve} \\
\texttt{0x05} & \texttt{SlotConfirm} \\
\bottomrule
\end{tabular}
\end{center}
\noindent CBOR was chosen over Protocol Buffers or JSON for three reasons:
(1)~self-describing format requiring no schema negotiation, (2)~compact
binary encoding suitable for LoRa's constrained bandwidth, and (3)~existing
use throughout the \qpq{} mesh stack, avoiding a second serialization
dependency.
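The tag-prefixed framing can be sketched as follows (Python; the CBOR body is treated as opaque bytes here, since encoding it requires a CBOR library):

```python
TAGS = {0x01: "SlotAnnounce", 0x02: "SlotQuery", 0x03: "SlotResponse",
        0x04: "SlotReserve", 0x05: "SlotConfirm"}

def frame(tag: int, cbor_payload: bytes) -> bytes:
    """Prefix the CBOR body with its single-byte message-type tag."""
    if tag not in TAGS:
        raise ValueError(f"unknown FAPP tag {tag:#04x}")
    return bytes([tag]) + cbor_payload

def parse(data: bytes):
    """Split a frame back into (message type name, CBOR body)."""
    if not data or data[0] not in TAGS:
        raise ValueError("not a FAPP frame")
    return TAGS[data[0]], data[1:]

kind, body = parse(frame(0x02, b"<cbor bytes>"))
assert kind == "SlotQuery" and body == b"<cbor bytes>"
```

Rejecting unknown tags on receipt keeps the parser forward-compatible: a future message type fails fast instead of being misinterpreted.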
\subsection{Propagation Rules}
\texttt{SlotAnnounce} messages propagate via controlled flooding:
\begin{enumerate}[nosep]
\item A relay node receiving an announce checks \texttt{hop\_count} $<$
\texttt{max\_hops} and \texttt{timestamp} + \texttt{ttl\_hours} $\times$
3600 $>$ current time. Failing either check, the message is dropped.
\item The announce is deduplicated against a bounded set of seen IDs
(capacity: 50{,}000). Duplicate IDs are silently dropped.
\item Sequence-based supersession: if the relay has seen a higher
\texttt{sequence} from the same \texttt{therapist\_address}, the
incoming announce is rejected.
\item If the relay has the therapist's public key, the Ed25519 signature
is verified. Invalid signatures cause immediate rejection.
\item The announce is stored in the relay's \texttt{FappStore} (bounded
to 10{,}000 total entries and 50 per therapist) and re-broadcast with
\texttt{hop\_count} incremented.
\end{enumerate}
\texttt{SlotQuery} messages propagate similarly but with shorter effective
TTLs. Relay nodes that hold matching \texttt{SlotAnnounce} records in
their local store respond directly, reducing query propagation depth.
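The relay-side checks in rules 1--4 condense into a single accept predicate. The sketch below (Python; the state layout is illustrative, and signature verification is omitted because it requires the therapist's public key) applies them in order:

```python
import time

class RelayState:
    def __init__(self):
        self.seen_ids = set()     # bounded in a real relay (FIFO eviction)
        self.latest_seq = {}      # therapist_address -> highest sequence seen

def accept_announce(state: RelayState, ann: dict, now=None) -> bool:
    """Apply the flooding rules: hop limit, TTL, dedup, supersession."""
    now = now if now is not None else time.time()
    if ann["hop_count"] >= ann["max_hops"]:
        return False                                     # rule 1: hop limit
    if ann["timestamp"] + ann["ttl_hours"] * 3600 <= now:
        return False                                     # rule 1: expired
    if ann["id"] in state.seen_ids:
        return False                                     # rule 2: duplicate
    if state.latest_seq.get(ann["therapist_address"], -1) >= ann["sequence"]:
        return False                                     # rule 3: superseded
    state.seen_ids.add(ann["id"])
    state.latest_seq[ann["therapist_address"]] = ann["sequence"]
    return True                                          # rule 5: store + re-broadcast
```

A \texttt{True} result means the announce is stored and re-broadcast with \texttt{hop\_count} incremented; every \texttt{False} path is a silent drop.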
% ===========================================================================
\section{Mesh Transport Integration}
\label{sec:transport}
% ===========================================================================
FAPP is transport-agnostic by design. It produces and consumes byte
frames; the underlying \qpq{} mesh stack handles routing, fragmentation,
and transport selection.
\subsection{Transport Layer Architecture}
The \qpq{} mesh provides three transport backends through a unified
\texttt{TransportManager} abstraction:
\begin{description}[nosep,leftmargin=1.5em]
\item[QUIC (primary).] QUIC over UDP~\cite{rfc9000} with TLS~1.3 mutual
authentication. Used for high-bandwidth links between nodes with
internet connectivity. Each mesh connection uses the ALPN identifier
\texttt{quicprochat/mesh/1}.
\item[TCP (fallback).] Length-prefixed TCP streams for environments where
UDP is blocked or NAT traversal fails. Provides reliable, ordered
delivery at the cost of head-of-line blocking.
\item[LoRa (constrained).] Sub-GHz radio links using LoRa modulation
(EU868 band)~\cite{lora2015semtech} for infrastructure-independent
operation. Subject to ETSI EN~300~220 duty cycle limits (1\% in the
868.0--868.6~MHz sub-band)~\cite{eu868dutycycle}.
\end{description}
\noindent The \texttt{TransportManager} selects the transport based on the
destination address type and provides automatic capability classification
(Unconstrained, Medium, Constrained, Severely\-Constrained) that influences
cryptographic mode selection.
\subsection{Hop-Based Propagation}
FAPP messages propagate through the mesh as payloads inside
\texttt{Mesh\-Envelope} containers. Each envelope carries:
\begin{itemize}[nosep]
\item Source and destination 16-byte truncated addresses.
\item TTL counter decremented at each hop.
\item Ed25519 signature (for authenticity, not confidentiality).
\item Nonce for replay detection.
\end{itemize}
\noindent The mesh router maintains a \texttt{RoutingTable} with entries
learned from periodic \texttt{MeshAnnounce} messages. For FAPP's flooding
pattern, outbound frames are sent to all known next-hop addresses
(\emph{flood fan-out}).
\subsection{Deduplication and Store-and-Forward}
Deduplication operates at two levels:
\begin{enumerate}[nosep]
\item \textbf{Envelope level.} The mesh router tracks seen envelope nonces
in a bounded set, preventing the same envelope from being forwarded
twice.
\item \textbf{FAPP level.} The \texttt{FappStore} tracks seen announce IDs
(bounded to 50{,}000 entries with FIFO eviction) and per-therapist
sequence numbers. An announce with a sequence number lower than the
last seen value for that therapist is rejected immediately.
\end{enumerate}
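The bounded seen-set with FIFO eviction can be sketched as follows (Python; a small capacity is used for illustration, versus 50{,}000 in FAPP):

```python
from collections import deque

class SeenSet:
    """Bounded dedup set with FIFO (oldest-first) eviction."""
    def __init__(self, capacity: int = 50_000):
        self.capacity = capacity
        self.order = deque()     # insertion order, for eviction
        self.members = set()     # O(1) membership test

    def check_and_insert(self, item: bytes) -> bool:
        """Return True if `item` is new (and record it), False if seen."""
        if item in self.members:
            return False
        if len(self.members) >= self.capacity:
            self.members.discard(self.order.popleft())   # evict oldest
        self.members.add(item)
        self.order.append(item)
        return True

seen = SeenSet(capacity=3)
assert seen.check_and_insert(b"a") and not seen.check_and_insert(b"a")
```

The pairing of a set with a deque keeps both the membership test and the eviction at $O(1)$; this is also why an evicted ID can be accepted again, which the TTL check (below, and in the replay analysis) backstops.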
\noindent Store-and-forward is handled by the \texttt{MeshStore}, which queues
messages for offline recipients and delivers them upon reconnection. This
is particularly relevant for therapist nodes that may only be online during
practice hours.
\subsection{Location Hints and PLZ-Based Filtering}
FAPP uses German postal codes (PLZ) as coarse location hints. The
five-digit PLZ system provides geographic granularity at the city or
district level without revealing exact addresses. Query-time filtering
on PLZ prefixes allows geographic scoping:
\begin{itemize}[nosep]
\item \texttt{"8"}: all of Bavaria and parts of Baden-Württemberg.
\item \texttt{"80"}: Munich metropolitan area.
\item \texttt{"803"}: central Munich districts.
\end{itemize}
\noindent This prefix-based approach lets patients control the trade-off between
geographic precision and result volume without disclosing their own
location.
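Prefix matching itself is a plain string comparison on the announce's \texttt{location\_hint}:

```python
def plz_matches(location_hint: str, plz_prefix=None) -> bool:
    """A query with no plz_prefix matches everywhere; otherwise the
    announce's PLZ must start with the requested prefix."""
    return plz_prefix is None or location_hint.startswith(plz_prefix)

announces = ["80331", "80999", "81675", "90403"]   # illustrative PLZs
munich = [p for p in announces if plz_matches(p, "80")]
assert munich == ["80331", "80999"]
```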
\subsection{LoRa Considerations}
LoRa links impose severe bandwidth constraints. At SF12/BW125 (the
most resilient configuration), the effective payload per frame is
approximately 51 bytes~\cite{lora2015semtech}. Measured FAPP message
sizes in the reference implementation are:
\begin{center}
\begin{tabular}{lrl}
\toprule
\textbf{Message} & \textbf{CBOR Size} & \textbf{SF12 Fragments} \\
\midrule
\texttt{SlotAnnounce} (2 slots) & $\sim$320 bytes & 7 \\
\texttt{SlotQuery} (all filters) & $\sim$90 bytes & 2 \\
\texttt{SlotReserve} & $\sim$110 bytes & 3 \\
\texttt{SlotConfirm} & $\sim$100 bytes & 2 \\
\bottomrule
\end{tabular}
\end{center}
\noindent The \qpq{} LoRa transport handles fragmentation and reassembly
transparently, with a \texttt{DutyCycleTracker} enforcing EU868 1\%
duty cycle compliance. At SF12, transmitting a full \texttt{SlotAnnounce}
takes approximately 14 seconds of airtime---about 0.4\% of an hour, and
thus more than a third of the hourly 1\% duty-cycle budget. This is viable
for low-frequency announcements but precludes real-time query--response
interactions over LoRa alone.
A practical deployment would use LoRa for announce propagation in
areas without internet connectivity, with queries flowing over
QUIC or TCP where available.
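The fragment counts in the table follow from $\lceil \text{size} / 51 \rceil$, and the duty-cycle arithmetic can be checked the same way (the 14-second airtime figure is taken from the text, not recomputed from the LoRa airtime formula):

```python
import math

SF12_PAYLOAD = 51          # usable bytes per frame at SF12/BW125

def fragments(cbor_size: int) -> int:
    """Number of LoRa frames needed for one FAPP message."""
    return math.ceil(cbor_size / SF12_PAYLOAD)

sizes = {"SlotAnnounce": 320, "SlotQuery": 90, "SlotReserve": 110, "SlotConfirm": 100}
assert [fragments(s) for s in sizes.values()] == [7, 2, 3, 2]   # matches the table

# A 1% duty cycle allows 36 s of airtime per hour; a 14 s SlotAnnounce
# is ~0.4% of the hour, i.e. more than a third of that budget.
budget_s = 0.01 * 3600
print(f"{14 / 3600:.2%} of the hour, {14 / budget_s:.0%} of the duty budget")
```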
% ===========================================================================
\section{Security Analysis}
\label{sec:security}
% ===========================================================================
\subsection{Patient Anonymity}
\texttt{SlotQuery} messages contain no patient-identifying information:
no return address, no public key, no device fingerprint. The
\texttt{query\_id} is a random 16-byte value generated per query,
providing no cross-query linkability.
\emph{Limitation:} In the current design, a relay node can observe
\emph{which incoming link} a query arrived on, potentially correlating
it with a directly connected patient node. Mitigations include
multi-hop query forwarding (where intermediate nodes strip source
information) and cover traffic. The return path for responses is
discussed as future work in Section~\ref{sec:future}.
\subsection{Therapist Verification}
\label{sec:verification}
FAPP provides three verification levels for therapist identity:
\begin{description}[nosep,leftmargin=1.5em]
\item[Level 0: Mesh signature only.]
The therapist's \texttt{SlotAnnounce} is signed with their Ed25519 key.
This proves control of the corresponding mesh identity but does not bind
it to a real-world person. The \texttt{approbation\_hash} field
(SHA-256 of the Approbation number) creates a commitment but is not
independently verifiable at this level, since an attacker could
fabricate a hash.
\item[Level 1: Endorsement by trusted relays.]
Trusted relay nodes---operated, for example, by patient advocacy
organizations (\emph{Unabhängige Patientenberatung})---can sign
\texttt{Endorsement} records attesting to a therapist's identity after
out-of-band verification. This creates a web-of-trust model where
patients can filter by endorser reputation.
\item[Level 2: Registry verification.]
A gateway node queries the KBV physician registry using the therapist's
\emph{Lebenslange Arztnummer} (LANR) and signs an attestation binding
the mesh identity to the registry entry. This provides the highest
assurance but requires infrastructure for registry access.
\end{description}
\noindent The current reference implementation operates at Level~0 with
a \texttt{profile\_url} field enabling manual cross-verification. The
client UI displays prominent warnings for unverified therapists.
\subsection{Denial of Service}
FAPP employs several mechanisms to resist denial-of-service attacks:
\begin{enumerate}[nosep]
\item \textbf{Rate limiting.} Relay nodes enforce a maximum of 10
\texttt{SlotAnnounce} messages per hour per \texttt{therapist\_address}
using a sliding-window rate limiter.
\item \textbf{Capacity bounds.} The \texttt{FappStore} limits total
cached announcements to 10{,}000 and per-therapist announcements to 50,
with oldest-first eviction.
\item \textbf{Hop limits.} The \texttt{max\_hops} field (default: 8)
bounds propagation depth, preventing amplification attacks.
\item \textbf{TTL enforcement.} Expired announcements (\texttt{timestamp}
+ \texttt{ttl\_hours} $\times$ 3600 $<$ current time) are dropped on
receipt and garbage-collected from stores periodically.
\item \textbf{Backpressure.} The mesh layer's \texttt{BackpressureController}
implements priority-based load shedding, preferring to drop low-priority
traffic (queries from unknown peers) before high-priority traffic
(announces from verified therapists).
\end{enumerate}
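The per-therapist sliding-window limiter from item 1 can be sketched as follows (Python; the class name and the small window used in the example are illustrative):

```python
import time
from collections import defaultdict, deque

class AnnounceRateLimiter:
    """At most `limit` SlotAnnounces per `window` seconds per therapist."""
    def __init__(self, limit: int = 10, window: float = 3600.0):
        self.limit, self.window = limit, window
        self.history = defaultdict(deque)   # therapist_address -> timestamps

    def allow(self, therapist_address: bytes, now=None) -> bool:
        now = now if now is not None else time.time()
        q = self.history[therapist_address]
        while q and q[0] <= now - self.window:   # slide the window forward
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

rl = AnnounceRateLimiter(limit=2, window=60)
assert rl.allow(b"T", now=0) and rl.allow(b"T", now=1)
assert not rl.allow(b"T", now=2)   # third announce inside the window
assert rl.allow(b"T", now=62)     # old entries have slid out of the window
```

Because each fabricated therapist identity gets its own window, a Sybil attacker pays the rate limit once per identity, which feeds into the Sybil analysis in Section~\ref{sec:sybil}.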
\subsection{Sybil Resistance}
\label{sec:sybil}
The Sybil attack~\cite{douceur2002sybil}---where an adversary creates
many pseudonymous identities---is a concern for FAPP in two contexts:
\begin{description}[nosep,leftmargin=1.5em]
\item[Fake therapists.] An attacker generates multiple Ed25519 keypairs
and publishes \texttt{SlotAnnounce} messages from each.
\emph{Mitigation:} The \texttt{approbation\_hash} field forces the
attacker to commit to a credential number per identity. While
fabricating hashes is trivial, each fabricated identity is
independently rate-limited and consumes the attacker's store
budget. Level~1 and Level~2 verification (Section~\ref{sec:verification})
provide progressively stronger Sybil resistance by requiring
out-of-band identity binding.
\item[Fake relay nodes.] An attacker operates many relay nodes to
observe traffic patterns.
\emph{Mitigation:} FAPP's flooding model means all relays see
approximately the same traffic; additional Sybil relays gain no
information advantage beyond what a single relay provides. For
point-to-point messages (\texttt{SlotReserve}, \texttt{SlotConfirm}),
E2E encryption ensures that even colluding relays cannot read
content.
\end{description}
\subsection{Slot Squatting}
An adversary could attempt to reserve all announced slots to deny
service to legitimate patients. Since \texttt{SlotReserve} messages are
E2E encrypted, the therapist must decrypt and process each reservation
individually. Mitigations include:
\begin{itemize}[nosep]
\item Therapists can reject suspicious reservations via
\texttt{SlotConfirm} with \texttt{confirmed = false}.
\item Rate limiting on \texttt{SlotReserve} per therapist (enforced at
the therapist node).
\item The patient must provide genuine contact information (encrypted)
for the reservation to be actionable; a therapist who cannot reach the
patient can cancel and re-announce the slot.
\end{itemize}
\subsection{Replay Protection}
Replay attacks are mitigated at two levels:
\begin{enumerate}[nosep]
\item \textbf{Announce deduplication.} The \texttt{(therapist\_address,
sequence)} pair uniquely identifies each announce version. A replayed
announce with a sequence number already seen or lower than the latest is
rejected.
\item \textbf{Envelope nonces.} The mesh envelope layer uses random nonces
tracked in a bounded seen-set, preventing replay of the transport
container.
\item \textbf{TTL expiry.} Even if a dedup cache is evicted, the
\texttt{timestamp} + \texttt{ttl\_hours} check prevents acceptance of
stale announces.
\end{enumerate}
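\noindent The deduplication rule can be sketched as a map from therapist
address to the highest sequence number accepted so far (a simplified model
of the store; names are illustrative):
\begin{verbatim}
use std::collections::HashMap;

struct AnnounceDedup {
    latest: HashMap<String, u64>,
}

impl AnnounceDedup {
    /// Accept an announce only if its sequence is strictly newer
    /// than the latest one seen for this therapist_address.
    fn accept(&mut self, therapist_address: &str, sequence: u64) -> bool {
        match self.latest.get(therapist_address) {
            Some(&seen) if sequence <= seen => false, // replay or stale
            _ => {
                self.latest.insert(therapist_address.to_string(), sequence);
                true
            }
        }
    }
}
\end{verbatim}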
% ===========================================================================
\section{Discussion}
\label{sec:discussion}
% ===========================================================================
\subsection{Comparison with Centralized Alternatives}
\begin{table}[t]
\centering
\caption{Comparison of psychotherapy appointment systems.}
\label{tab:comparison}
\begin{tabularx}{\textwidth}{lccccX}
\toprule
& \textbf{Real-time} & \textbf{Patient} & \textbf{Decen-} &
\textbf{Verifi-} & \\
\textbf{System} & \textbf{slots} & \textbf{anon.} & \textbf{tralized} &
\textbf{cation} & \textbf{Notes} \\
\midrule
116117~\cite{terminservice116117} & Partial & No & No & Official &
Telephone/web; limited slot data; identity required for referral. \\
Doctolib~\cite{doctolib2024} & Yes & No & No & Self-report &
Tracks search behavior; therapist opt-in required; commercial fees. \\
KBV Arztsuche~\cite{kbvarztsuche} & No & Partial & No & Official &
Practice info only; no real-time availability. \\
FAPP (Level~0) & Yes & Yes & Yes & Mesh sig. &
Anonymous search; no infrastructure; limited identity assurance. \\
FAPP (Level~2) & Yes & Yes & Yes & Registry &
Requires trusted gateway; strongest guarantees. \\
\bottomrule
\end{tabularx}
\end{table}
Table~\ref{tab:comparison} summarizes the trade-offs. FAPP is the only system
that offers both real-time slot visibility and patient anonymity. This
comes at the cost of weaker therapist verification at Level~0, which is
an explicit design trade-off: we prioritize patient privacy and system
availability over centralized credential checking, with a planned
upgrade path to registry-backed verification.
\subsection{Deployment Challenges}
\paragraph{Therapist adoption.}
FAPP requires therapists to run mesh node software and actively manage
their slot announcements. While the protocol is designed for automation
(a background daemon can publish slots from the practice management
system), adoption depends on therapists perceiving the system as
lower-friction than existing alternatives. Integration with established
PVS (Praxisverwaltungssoftware) systems is essential for adoption.
\paragraph{Network bootstrapping.}
A mesh network requires a critical mass of relay nodes to provide
adequate coverage. Initial deployment can leverage existing \qpq{}
infrastructure (the messenger's server-to-server federation provides
seed connectivity), but sustained operation benefits from dedicated
relay nodes at healthcare institutions, patient advocacy organizations,
or community networks.
\paragraph{Key management.}
Therapists must protect their Ed25519 private key, which serves as
both their mesh identity and the anchor for their professional
reputation. Key compromise requires generating a new identity and
re-establishing verification, analogous to certificate revocation in
PKI systems. The \qpq{} key transparency module provides Merkle-log
based revocation, but its integration with FAPP is ongoing work.
\subsection{Regulatory Considerations}
FAPP does not replace or circumvent the KV's appointment allocation
system. It operates as a complementary discovery layer: therapists
who have unfilled slots can announce them through the mesh in addition
to reporting them through official channels. Since FAPP does not
handle billing, prescriptions, or clinical data, it falls outside the
scope of Telematikinfrastruktur (TI) certification requirements.
Patient anonymity aligns with GDPR's data minimization principle
(Art.~5(1)(c)): by not collecting or processing patient identity data
during the search phase, FAPP avoids creating the health-related personal
data that centralized platforms inevitably generate.
\subsection{LoRa Constraints and Hybrid Deployment}
Pure LoRa deployment is impractical for interactive query--response
patterns due to duty cycle constraints and high latency. A realistic
deployment uses LoRa for \emph{announce propagation} in connectivity
gaps (rural areas, community mesh networks) while routing queries
and reservations over internet-connected transports. The \qpq{}
\texttt{TransportManager} handles this routing transparently:
a relay node connected to both LoRa and TCP will bridge announces
between networks without application-layer awareness.
% ===========================================================================
\section{Conclusion and Future Work}
\label{sec:future}
% ===========================================================================
FAPP demonstrates that privacy-preserving appointment discovery is
achievable in a decentralized architecture without sacrificing the
verifiability requirements of a regulated healthcare profession.
The asymmetric privacy model---public therapist, anonymous patient---is
not merely a technical design choice but a reflection of the social
contract underlying psychotherapy: the professional is accountable,
the patient is protected.
The reference implementation in Rust, comprising approximately 1{,}600
lines of protocol code with 31 dedicated tests and full E2E encryption
support, validates the design's feasibility. CBOR serialization keeps
message sizes within LoRa fragmentation budgets, and the integration
with \qpq{}'s multi-transport mesh stack demonstrates that
a single protocol can operate across QUIC, TCP, and radio links.
Several directions remain for future work:
\paragraph{Anonymous return paths.}
The current design lacks a robust mechanism for routing
\texttt{SlotResponse} messages back to anonymous query originators.
The \texttt{SlotQuery} specification includes a \texttt{return\_path}
field for onion-style routing~\cite{goldschlag1996onion}, where each
hop in the return path is encrypted to the respective relay's key,
but this is not yet implemented. Realizing this would provide
Mixminion-style~\cite{danezis2003mixminion} unlinkability between
queries and their originators.
\paragraph{Multi-hop privacy for reservations.}
\texttt{SlotReserve} messages are currently E2E encrypted but routed
by flooding, which reveals the approximate network location of the
originator to neighboring nodes. A circuit-based routing scheme,
where the patient establishes a multi-hop tunnel before sending the
reservation, would provide stronger traffic analysis resistance.
\paragraph{E2E encrypted channels.}
After a successful reservation, the therapist and patient could
establish a persistent MLS~\cite{rfc9420} session through the mesh
for ongoing communication (appointment changes, intake forms).
The \qpq{} stack already supports MLS group key agreement; bridging
FAPP's ephemeral key exchange to a durable MLS session is a natural
extension.
\paragraph{Endorsement gossip protocol.}
Level~1 verification (Section~\ref{sec:verification}) requires a gossip
protocol for distributing and aggregating endorsements from trusted
relays. This protocol must resist endorsement inflation (where
colluding nodes endorse each other) while remaining lightweight
enough for constrained transports.
\paragraph{Real-world pilot.}
We plan a pilot deployment in a German metropolitan area, partnering
with a small group of psychotherapists willing to announce slots
through the mesh alongside their existing booking channels. The
pilot will measure (a)~slot discovery latency, (b)~relay network
coverage requirements, and (c)~therapist and patient usability
perceptions. Lessons from the pilot will inform protocol revisions and
guide regulatory engagement with the relevant KV.
\paragraph{Post-quantum key exchange.}
The \qpq{} mesh stack supports a hybrid X25519 + ML-KEM-768 key
encapsulation mechanism at the envelope level. Integrating post-quantum
key exchange into FAPP's reservation encryption would future-proof
patient contact data against quantum adversaries, though the increased
message sizes (approximately 2{,}676 bytes for a PQ-hybrid KeyPackage
versus 306 bytes for classical) make this impractical on LoRa links
with current duty cycle budgets.
\bigskip
\noindent The source code, protocol specification, and integration tests
are available at the \qpq{} project repository under the MIT license.
\bibliographystyle{plain}
\bibliography{fapp-refs}
\end{document}


@@ -27,3 +27,12 @@ message DownloadBlobResponse {
  uint64 total_size = 2;
  string mime_type = 3;
}

// Method ID: 602
message DeleteBlobRequest {
  bytes blob_id = 1;
}

message DeleteBlobResponse {
  bool deleted = 1;
}

viz/bridge/Cargo.toml Normal file

@@ -0,0 +1,14 @@
[package]
name = "mesh-viz-bridge"
version = "0.1.0"
edition = "2021"
description = "WebSocket bridge: tails NDJSON mesh viz events to browser clients"
license = "Apache-2.0 OR MIT"
[dependencies]
anyhow = "1"
clap = { version = "4", features = ["derive"] }
futures-util = "0.3"
serde_json = "1"
tokio = { version = "1", features = ["macros", "rt-multi-thread", "signal", "time", "fs", "io-util", "net", "sync"] }
tokio-tungstenite = "0.26"

viz/bridge/src/main.rs Normal file

@@ -0,0 +1,250 @@
//! Broadcasts newline-delimited JSON mesh events to all connected WebSocket clients.
//!
//! Sources:
//! - `--demo`: synthetic topology + hops (no file needed)
//! - `--file`: poll a JSONL file for appended lines (e.g. written by `QPC_MESH_VIZ_LOG`)
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::Arc;
use clap::Parser;
use futures_util::{SinkExt, StreamExt};
use tokio::net::{TcpListener, TcpStream};
use tokio::sync::broadcast;
use tokio_tungstenite::tungstenite::Message;
#[derive(Parser, Debug)]
#[command(name = "mesh-viz-bridge")]
struct Args {
/// Listen address for the WebSocket server (use this as the connect URL in mesh-graph.html).
#[arg(long, default_value = "127.0.0.1:8765")]
listen: String,
/// Poll this file for new NDJSON lines (append-only).
#[arg(long)]
file: Option<PathBuf>,
/// Emit synthetic events for UI development.
#[arg(long)]
demo: bool,
/// Milliseconds between file polls when using `--file`.
#[arg(long, default_value = "250")]
poll_ms: u64,
/// Milliseconds between demo events.
#[arg(long, default_value = "900")]
demo_interval_ms: u64,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
if args.file.is_some() && args.demo {
eprintln!("Use either --file or --demo, not both. Preferring --file.");
}
let (tx, _rx) = broadcast::channel::<String>(256);
let tx = Arc::new(tx);
if args.demo && args.file.is_none() {
let txd = Arc::clone(&tx);
let interval = args.demo_interval_ms;
tokio::spawn(async move {
demo_loop(txd, interval).await;
});
} else if let Some(ref path) = args.file {
let path = path.clone();
let txf = Arc::clone(&tx);
let poll = args.poll_ms;
tokio::spawn(async move {
tail_file_loop(path, txf, poll).await;
});
} else {
eprintln!("No --file or --demo given: connected clients will receive no events.");
eprintln!("Start with: mesh-viz-bridge --demo OR mesh-viz-bridge --file ./mesh-viz-events.jsonl");
}
let listener = TcpListener::bind(&args.listen).await?;
eprintln!("mesh-viz-bridge WebSocket listening on ws://{}", args.listen);
loop {
let (stream, addr) = listener.accept().await?;
let txc = Arc::clone(&tx);
tokio::spawn(async move {
if let Err(e) = handle_client(stream, txc).await {
eprintln!("client {} error: {}", addr, e);
}
});
}
}
async fn handle_client(stream: TcpStream, tx: Arc<broadcast::Sender<String>>) -> anyhow::Result<()> {
let ws = tokio_tungstenite::accept_async(stream).await?;
let (mut write, mut read) = ws.split();
let mut rx = tx.subscribe();
loop {
tokio::select! {
msg = read.next() => {
match msg {
Some(Ok(Message::Close(_))) | None => break,
Some(Ok(Message::Ping(p))) => {
let _ = write.send(Message::Pong(p)).await;
}
Some(Err(e)) => return Err(e.into()),
_ => {}
}
}
line = rx.recv() => {
match line {
Ok(s) => write.send(Message::Text(s.into())).await?,
Err(broadcast::error::RecvError::Lagged(_)) => continue,
Err(broadcast::error::RecvError::Closed) => break,
}
}
}
}
Ok(())
}
async fn tail_file_loop(path: PathBuf, tx: Arc<broadcast::Sender<String>>, poll_ms: u64) {
let mut offset: u64 = 0;
loop {
match tokio::fs::File::open(&path).await {
Ok(mut file) => {
use tokio::io::{AsyncReadExt, AsyncSeekExt};
// A shrunken file means truncation or rotation: restart from the top.
if let Ok(meta) = file.metadata().await {
if meta.len() < offset {
offset = 0;
}
}
if file.seek(std::io::SeekFrom::Start(offset)).await.is_ok() {
let mut buf = Vec::new();
if file.read_to_end(&mut buf).await.is_ok() {
// Only consume complete lines; a partially written trailing
// line stays unconsumed and is re-read on the next poll.
let consumed = buf.iter().rposition(|&b| b == b'\n').map_or(0, |i| i + 1);
offset += consumed as u64;
let text = String::from_utf8_lossy(&buf[..consumed]);
for line in text.lines() {
let line = line.trim();
if line.is_empty() {
continue;
}
let _ = tx.send(line.to_string());
}
}
}
}
Err(_) => {
// File does not exist yet; keep polling until it appears.
}
}
tokio::time::sleep(std::time::Duration::from_millis(poll_ms)).await;
}
}
async fn demo_loop(tx: Arc<broadcast::Sender<String>>, interval_ms: u64) {
let nodes = [
("n1", "alpha", "active", 12u64),
("n2", "beta", "active", 18),
("n3", "gamma", "idle", 45),
("n4", "delta", "active", 22),
];
let mut tick: u64 = 0;
let mut present: HashSet<&'static str> = HashSet::new();
loop {
// Simulate join/leave
if tick % 14 == 0 {
present.clear();
present.insert("n1");
present.insert("n2");
} else if tick % 14 == 3 {
present.insert("n3");
} else if tick % 14 == 7 {
present.insert("n4");
} else if tick % 14 == 10 {
present.remove("n3");
} else if tick % 14 == 12 {
let _ = tx.send(
serde_json::json!({
"type": "node_status",
"id": "n2",
"status": "error",
"latency_ms": 999u64
})
.to_string(),
);
}
if tick % 14 != 12 {
let snap_nodes: Vec<_> = nodes
.iter()
.filter(|(id, _, _, _)| present.contains(id))
.map(|(id, label, status, lat)| {
serde_json::json!({
"id": id,
"label": label,
"status": status,
"latency_ms": lat
})
})
.collect();
let links: Vec<_> = {
let mut v = vec![];
if present.contains("n1") && present.contains("n2") {
v.push(serde_json::json!({"source": "n1", "target": "n2"}));
}
if present.contains("n2") && present.contains("n3") {
v.push(serde_json::json!({"source": "n2", "target": "n3"}));
}
if present.contains("n3") && present.contains("n4") {
v.push(serde_json::json!({"source": "n3", "target": "n4"}));
}
if present.contains("n2") && present.contains("n4") {
v.push(serde_json::json!({"source": "n2", "target": "n4"}));
}
v
};
let _ = tx.send(
serde_json::json!({
"type": "snapshot",
"nodes": snap_nodes,
"links": links
})
.to_string(),
);
}
// Message hop animation
let hop_pairs = [
("n1", "n2"),
("n2", "n3"),
("n2", "n4"),
("n3", "n4"),
];
let (a, b) = hop_pairs[(tick as usize) % hop_pairs.len()];
if present.contains(a) && present.contains(b) {
let ms = 8 + (tick % 40);
let _ = tx.send(
serde_json::json!({
"type": "hop",
"from": a,
"to": b,
"ms": ms
})
.to_string(),
);
}
tick = tick.wrapping_add(1);
tokio::time::sleep(std::time::Duration::from_millis(interval_ms)).await;
}
}

viz/mesh-graph.html Normal file

@@ -0,0 +1,493 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>QuicProQuo mesh visualizer</title>
<script src="https://cdn.jsdelivr.net/npm/d3@7.9.0/dist/d3.min.js"></script>
<style>
:root {
--bg: #0f1419;
--panel: #1a2332;
--text: #e7ecf3;
--muted: #8b9cb3;
--edge: #3d4f66;
--active: #22c55e;
--idle: #eab308;
--error: #ef4444;
}
* { box-sizing: border-box; }
body {
margin: 0;
font-family: "JetBrains Mono", "Fira Code", ui-monospace, monospace;
background: var(--bg);
color: var(--text);
min-height: 100vh;
display: flex;
flex-direction: column;
}
header {
display: flex;
flex-wrap: wrap;
gap: 0.75rem;
align-items: center;
padding: 0.6rem 1rem;
background: var(--panel);
border-bottom: 1px solid #2a3544;
}
header h1 {
margin: 0;
font-size: 1rem;
font-weight: 600;
letter-spacing: 0.02em;
}
header .badge {
font-size: 0.7rem;
padding: 0.2rem 0.5rem;
border-radius: 4px;
background: #243044;
color: var(--muted);
}
header .badge.live { color: var(--active); }
header .badge.demo { color: var(--idle); }
header .badge.file { color: #38bdf8; }
label { font-size: 0.75rem; color: var(--muted); }
input[type="text"] {
width: 220px;
padding: 0.35rem 0.5rem;
border: 1px solid #2a3544;
border-radius: 4px;
background: var(--bg);
color: var(--text);
font-family: inherit;
font-size: 0.75rem;
}
button {
padding: 0.35rem 0.65rem;
border-radius: 4px;
border: 1px solid #3d4f66;
background: #243044;
color: var(--text);
font-family: inherit;
font-size: 0.75rem;
cursor: pointer;
}
button:hover { background: #2c3c55; }
button.primary { border-color: var(--active); color: var(--active); }
#chart-wrap {
flex: 1;
position: relative;
min-height: 400px;
}
svg#mesh {
width: 100%;
height: 100%;
display: block;
}
.links line {
stroke: var(--edge);
stroke-opacity: 0.65;
stroke-width: 1.5px;
}
.links line.hop-flash {
stroke: #7dd3fc;
stroke-width: 3px;
stroke-opacity: 1;
filter: drop-shadow(0 0 4px #38bdf8);
}
.nodes circle {
stroke: #1a2332;
stroke-width: 2px;
}
.nodes circle.status-active { fill: var(--active); }
.nodes circle.status-idle { fill: var(--idle); }
.nodes circle.status-error { fill: var(--error); }
.nodes text {
fill: var(--text);
font-size: 11px;
pointer-events: none;
text-shadow: 0 0 4px var(--bg), 0 0 6px var(--bg);
}
#tooltip {
position: fixed;
pointer-events: none;
z-index: 20;
background: rgba(26, 35, 50, 0.95);
border: 1px solid #3d4f66;
padding: 0.5rem 0.65rem;
border-radius: 6px;
font-size: 0.72rem;
max-width: 280px;
display: none;
}
#tooltip.visible { display: block; }
#log {
max-height: 88px;
overflow-y: auto;
font-size: 0.65rem;
color: var(--muted);
padding: 0.35rem 1rem;
border-top: 1px solid #2a3544;
background: #0c1016;
}
</style>
</head>
<body>
<header>
<h1>QuicProQuo mesh</h1>
<span id="mode-badge" class="badge">disconnected</span>
<label>WS <input id="ws-url" type="text" value="ws://127.0.0.1:8765" /></label>
<button type="button" id="btn-connect" class="primary">Connect</button>
<button type="button" id="btn-disconnect">Disconnect</button>
<button type="button" id="btn-demo">Demo mode</button>
<label style="display:flex;align-items:center;gap:0.35rem;">
<span>JSONL</span>
<input id="file-jsonl" type="file" accept=".jsonl,.ndjson,.json,.txt" />
</label>
</header>
<div id="chart-wrap">
<svg id="mesh"></svg>
<div id="tooltip"></div>
</div>
<div id="log"></div>
<script>
(function () {
let mode = "off"; // off | demo | ws | file
let ws = null;
let demoTimer = null;
let nodes = [];
let links = [];
let simulation = null;
let linkSel = null;
let nodeSel = null;
let labelSel = null;
const svg = d3.select("#mesh");
const tooltip = d3.select("#tooltip");
const logEl = document.getElementById("log");
const modeBadge = document.getElementById("mode-badge");
function log(msg) {
const t = new Date().toISOString().slice(11, 19);
logEl.textContent = `[${t}] ${msg}\n` + logEl.textContent.split("\n").slice(0, 12).join("\n");
}
function setMode(m) {
mode = m;
modeBadge.className = "badge";
if (m === "demo") { modeBadge.textContent = "demo"; modeBadge.classList.add("demo"); }
else if (m === "ws") { modeBadge.textContent = "live (WebSocket)"; modeBadge.classList.add("live"); }
else if (m === "file") { modeBadge.textContent = "file JSONL"; modeBadge.classList.add("file"); }
else { modeBadge.textContent = "disconnected"; }
}
function resize() {
const wrap = document.getElementById("chart-wrap");
const w = wrap.clientWidth;
const h = Math.max(400, window.innerHeight - wrap.offsetTop - 120);
svg.attr("width", w).attr("height", h);
if (simulation) {
simulation.force("center", d3.forceCenter(w / 2, h / 2));
simulation.alpha(0.35).restart();
}
}
function ensureSimulation() {
const w = +svg.attr("width") || 800;
const h = +svg.attr("height") || 500;
const root = svg.selectAll("g.root").data([0]).join("g").attr("class", "root");
const linkLayer = root.selectAll("g.links").data([0]).join("g").attr("class", "links");
const nodeLayer = root.selectAll("g.nodes").data([0]).join("g").attr("class", "nodes");
const labelLayer = root.selectAll("g.labels").data([0]).join("g").attr("class", "labels");
linkSel = linkLayer.selectAll("line");
nodeSel = nodeLayer.selectAll("circle");
labelSel = labelLayer.selectAll("text");
simulation = d3.forceSimulation(nodes)
.force("link", d3.forceLink(links).id(d => d.id).distance(90).strength(0.45))
.force("charge", d3.forceManyBody().strength(-220))
.force("center", d3.forceCenter(w / 2, h / 2))
.on("tick", () => {
linkSel
.attr("x1", d => d.source.x)
.attr("y1", d => d.source.y)
.attr("x2", d => d.target.x)
.attr("y2", d => d.target.y);
nodeSel.attr("cx", d => d.x).attr("cy", d => d.y);
labelSel.attr("x", d => d.x).attr("y", d => d.y + 4);
});
}
function syncGraph() {
if (!simulation) ensureSimulation();
linkSel = svg.select("g.links").selectAll("line")
.data(links, d => {
const s = d.source.id ?? d.source;
const t = d.target.id ?? d.target;
return `${s}|${t}`;
});
linkSel.exit().remove();
const linkEnter = linkSel.enter().append("line");
linkSel = linkEnter.merge(linkSel);
nodeSel = svg.select("g.nodes").selectAll("circle")
.data(nodes, d => d.id);
nodeSel.exit()
.transition().duration(400)
.attr("r", 0)
.remove();
const nodeEnter = nodeSel.enter().append("circle")
.attr("r", 0)
.attr("class", d => `status-${d.status || "idle"}`)
.call(d3.drag()
.on("start", (ev, d) => {
if (!ev.active) simulation.alphaTarget(0.35).restart();
d.fx = d.x; d.fy = d.y;
})
.on("drag", (ev, d) => { d.fx = ev.x; d.fy = ev.y; })
.on("end", (ev, d) => {
if (!ev.active) simulation.alphaTarget(0);
d.fx = null; d.fy = null;
}));
nodeEnter.transition().duration(500).attr("r", 10);
nodeSel = nodeEnter.merge(nodeSel)
.attr("class", d => `status-${d.status || "idle"}`)
.on("mouseenter", (ev, d) => {
tooltip.classed("visible", true)
.html(`<strong>${escapeHtml(d.label || d.id)}</strong><br/>
id: ${escapeHtml(d.id)}<br/>
status: ${escapeHtml(d.status || "idle")}<br/>
latency: ${d.latency_ms != null ? d.latency_ms + " ms" : "—"}`);
})
.on("mousemove", (ev) => {
tooltip.style("left", (ev.clientX + 14) + "px").style("top", (ev.clientY + 10) + "px");
})
.on("mouseleave", () => tooltip.classed("visible", false));
labelSel = svg.select("g.labels").selectAll("text")
.data(nodes, d => d.id);
labelSel.exit().remove();
const labelEnter = labelSel.enter().append("text")
.attr("text-anchor", "middle")
.text(d => d.label || d.id.slice(0, 8));
labelSel = labelEnter.merge(labelSel).text(d => d.label || d.id.slice(0, 8));
simulation.nodes(nodes);
simulation.force("link").links(links);
simulation.alpha(1).restart();
}
function escapeHtml(s) {
return String(s).replace(/[&<>"']/g, c => ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]));
}
function resolveLinkEnds(link) {
const sid = typeof link.source === "object" ? link.source.id : link.source;
const tid = typeof link.target === "object" ? link.target.id : link.target;
const s = nodes.find(n => n.id === sid);
const t = nodes.find(n => n.id === tid);
if (!s || !t) return null;
return { source: s, target: t };
}
function flashHop(fromId, toId) {
svg.select("g.links").selectAll("line").each(function (d) {
const sid = d.source.id ?? d.source;
const tid = d.target.id ?? d.target;
if ((sid === fromId && tid === toId) || (sid === toId && tid === fromId)) {
const el = d3.select(this);
el.classed("hop-flash", true);
setTimeout(() => el.classed("hop-flash", false), 420);
}
});
}
function applyEvent(obj) {
if (!obj || typeof obj.type !== "string") return;
if (obj.type === "snapshot") {
nodes = (obj.nodes || []).map(n => ({
id: n.id,
label: n.label || n.id,
status: n.status || "idle",
latency_ms: n.latency_ms
}));
const rawLinks = obj.links || [];
links = rawLinks
.map(L => resolveLinkEnds({ source: L.source, target: L.target }))
.filter(Boolean);
syncGraph();
return;
}
if (obj.type === "node_join") {
const i = nodes.findIndex(n => n.id === obj.id);
const rec = {
id: obj.id,
label: obj.label || obj.id,
status: obj.status || "active",
latency_ms: obj.latency_ms
};
if (i >= 0) nodes[i] = rec;
else nodes.push(rec);
syncGraph();
return;
}
if (obj.type === "node_leave") {
nodes = nodes.filter(n => n.id !== obj.id);
links = links.filter(l => {
const a = l.source.id || l.source;
const b = l.target.id || l.target;
return a !== obj.id && b !== obj.id;
});
syncGraph();
return;
}
if (obj.type === "node_status") {
const n = nodes.find(x => x.id === obj.id);
if (n) {
if (obj.status) n.status = obj.status;
if (obj.latency_ms != null) n.latency_ms = obj.latency_ms;
syncGraph();
}
return;
}
if (obj.type === "hop") {
flashHop(obj.from, obj.to);
return;
}
}
function handleLine(line) {
line = line.trim();
if (!line || line[0] === "#") return;
try {
applyEvent(JSON.parse(line));
} catch (e) {
log("bad JSON: " + line.slice(0, 80));
}
}
function stopDemo() {
if (demoTimer) {
clearInterval(demoTimer);
demoTimer = null;
}
}
function startDemo() {
stopDemo();
disconnectWs();
setMode("demo");
log("Demo mode: synthetic joins/leaves and hops");
let tick = 0;
const pool = [
{ id: "n1", label: "alpha", status: "active", latency_ms: 11 },
{ id: "n2", label: "beta", status: "active", latency_ms: 19 },
{ id: "n3", label: "gamma", status: "idle", latency_ms: 52 },
{ id: "n4", label: "delta", status: "active", latency_ms: 27 }
];
let present = new Set(["n1", "n2"]);
function emitSnapshot() {
const snapNodes = pool.filter(n => present.has(n.id));
const L = [];
if (present.has("n1") && present.has("n2")) L.push({ source: "n1", target: "n2" });
if (present.has("n2") && present.has("n3")) L.push({ source: "n2", target: "n3" });
if (present.has("n3") && present.has("n4")) L.push({ source: "n3", target: "n4" });
if (present.has("n2") && present.has("n4")) L.push({ source: "n2", target: "n4" });
applyEvent({ type: "snapshot", nodes: snapNodes, links: L });
}
emitSnapshot();
demoTimer = setInterval(() => {
tick++;
if (tick % 12 === 2) present.add("n3");
if (tick % 12 === 5) present.add("n4");
if (tick % 12 === 8) present.delete("n3");
if (tick % 12 === 10) {
applyEvent({ type: "node_status", id: "n2", status: "error", latency_ms: 800 });
} else if (tick % 12 === 11) {
applyEvent({ type: "node_status", id: "n2", status: "active", latency_ms: 19 });
}
emitSnapshot();
const pairs = [["n1", "n2"], ["n2", "n3"], ["n2", "n4"], ["n3", "n4"]];
const [a, b] = pairs[tick % pairs.length];
if (present.has(a) && present.has(b)) {
applyEvent({ type: "hop", from: a, to: b, ms: 10 + (tick % 35) });
}
}, 850);
}
function disconnectWs() {
if (ws) {
ws.close();
ws = null;
}
if (mode === "ws") setMode("off");
}
function connectWs() {
stopDemo();
disconnectWs();
const url = document.getElementById("ws-url").value.trim();
try {
ws = new WebSocket(url);
} catch (e) {
log("WebSocket error: " + e);
return;
}
setMode("ws");
ws.onopen = () => log("WebSocket open " + url);
ws.onclose = () => { log("WebSocket closed"); if (mode === "ws") setMode("off"); };
ws.onerror = () => log("WebSocket error");
ws.onmessage = (ev) => handleLine(ev.data);
}
document.getElementById("btn-connect").onclick = connectWs;
document.getElementById("btn-disconnect").onclick = () => { stopDemo(); disconnectWs(); setMode("off"); };
document.getElementById("btn-demo").onclick = startDemo;
document.getElementById("file-jsonl").onchange = (ev) => {
const f = ev.target.files[0];
if (!f) return;
stopDemo();
disconnectWs();
setMode("file");
const r = new FileReader();
r.onload = () => {
String(r.result).split("\n").forEach(handleLine);
log("Loaded file " + f.name);
};
r.readAsText(f);
};
window.addEventListener("resize", resize);
resize();
ensureSimulation();
startDemo();
})();
</script>
</body>
</html>

viz/sample-feed.jsonl Normal file

@@ -0,0 +1,7 @@
{"type":"snapshot","nodes":[{"id":"relay-a","label":"relay-a","status":"active","latency_ms":14},{"id":"relay-b","label":"relay-b","status":"active","latency_ms":21},{"id":"edge-c","label":"edge-c","status":"idle","latency_ms":48}],"links":[{"source":"relay-a","target":"relay-b"},{"source":"relay-b","target":"edge-c"}]}
{"type":"hop","from":"relay-a","to":"relay-b","ms":18}
{"type":"hop","from":"relay-b","to":"edge-c","ms":33}
{"type":"node_status","id":"edge-c","status":"error","latency_ms":500}
{"type":"node_status","id":"edge-c","status":"idle","latency_ms":55}
{"type":"node_leave","id":"edge-c"}
{"type":"snapshot","nodes":[{"id":"relay-a","label":"relay-a","status":"active","latency_ms":14},{"id":"relay-b","label":"relay-b","status":"active","latency_ms":21}],"links":[{"source":"relay-a","target":"relay-b"}]}