59 Commits

Author SHA1 Message Date
d2ad0dd21a chore: add CCC logo asset 2026-05-04 14:48:14 +00:00
9e647f37d5 docs: add FAPP research paper LaTeX sources
Add paper directory with LaTeX source, bibliography, and Makefile
for the FAPP (Federated Application Protocol) research paper.
Build artifacts are gitignored.
2026-04-12 14:16:24 +00:00
da0085f1a6 feat: add observability module and wire MeshNode run() with background tasks
Add health checks (/healthz), Prometheus metrics export (/metricsz),
and tracing spans to the P2P mesh node. MeshNode.run() starts GC and
health server as background tasks, returning a RunHandle for lifecycle
management. Health endpoint returns 503 during graceful shutdown drain.
2026-04-11 17:52:03 +02:00
95ce8898fd feat: add mesh network visualizer
- D3.js force-directed graph for real-time mesh visualization
- WebSocket server (mesh-viz-bridge crate) for live updates
- Demo mode with simulated topology
- JSONL file upload for offline analysis
- Optional viz logging in mesh_node forwarding
2026-04-06 21:43:28 +02:00
99d36679c8 docs: add CLAUDE.md, unignore from .gitignore 2026-04-06 16:57:43 +02:00
a856f9bb53 feat: wire traffic resistance, implement v2 CLI commands, add auth expiry detection
Server:
- Wire traffic resistance decoy generator into main.rs startup behind
  --traffic-resistance flag + --decoy-interval-ms config (feature-gated)

Client:
- Implement v2 CLI one-shot commands: send, recv, dm, group create, group invite
  All previously printed "coming soon" — now fully functional with MLS state
  restoration, peer resolution, KeyPackage fetch, and MLS encryption pipeline

SDK:
- Add SdkError::SessionExpired variant + is_auth_expired() helper for
  detecting expired session tokens (RpcStatus::Unauthorized)
- Add ClientEvent::AuthExpired for UI-layer session expiry notification
2026-04-05 00:03:12 +02:00
f58ce2529d feat: add 11 features and bug fixes across server, SDK, and client
Server fixes:
- Wire v2 moderation handlers to ModerationService (SQL persistence) —
  bans now survive restarts instead of living in-memory DashMap
- Add admin role enforcement via QPC_ADMIN_KEYS env var for ban/unban
- Fix audit.rs now_iso8601() to emit actual ISO-8601 timestamps
- Add group admin authorization — only creator can remove members or
  update metadata

Server features:
- Add DeleteBlob RPC (method 602) with filesystem cleanup
- Register delete_blob in v2 handler method registry

SDK features:
- Add ClientEvent::IdentityKeyChanged for safety number change alerts
- Add ClientEvent::ReadReceipt and DeliveryConfirmation variants
- Add peer_identity_keys table with store/get methods for key tracking
- Add search_messages() full-text search across all conversations
- Add delete_conversation() with cascading message/outbox cleanup

Client features:
- Wire v2 TUI message sending to SDK MLS encryption pipeline
- Add /search command to v2 REPL with cross-conversation results
- Add /delete-conversation command to v2 REPL
- Add unread count badges in v1 TUI sidebar (yellow+bold styling)
2026-04-04 23:31:37 +02:00
4dadd01c6b feat: add E2E encryption module to meshservice
X25519 key agreement + HKDF-SHA256 + ChaCha20-Poly1305 AEAD for
opt-in payload encryption. Each message uses a fresh ephemeral key
for forward secrecy. 11 new tests cover roundtrip, wrong-key
rejection, tampering, wire format integration, and edge cases.
2026-04-03 10:48:16 +02:00
fb6b80c81c feat: wire FAPP message handling into mesh router
When a MeshEnvelope is delivered locally and its payload starts with a
known FAPP wire tag (0x01-0x05), MeshNode.process_incoming now delegates
to FappRouter instead of returning a raw Deliver action. Nodes without
FAPP capabilities still receive FAPP-tagged payloads as normal Deliver
actions, preserving backward compatibility.

Adds IncomingAction::Fapp variant, is_fapp_payload() helper, and three
integration tests covering the routing, passthrough, and no-router cases.
2026-04-03 07:44:19 +02:00
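The first-byte tag dispatch described in that commit can be sketched in std-only Rust. This is an illustrative reconstruction from the commit message, not the crate's actual code; the `route_local_delivery` helper and its signature are assumptions.

```rust
// Sketch of first-byte wire-tag dispatch for FAPP frames.
// Tag values 0x01-0x05 follow the commit message; everything
// else here is illustrative, not the crate's real API.

/// Returns true when a payload begins with a known FAPP wire tag.
fn is_fapp_payload(payload: &[u8]) -> bool {
    matches!(payload.first(), Some(0x01..=0x05))
}

/// Action the mesh node takes for a locally delivered envelope.
#[derive(Debug, PartialEq)]
enum IncomingAction {
    /// Hand the payload to the FAPP router (node has FAPP capability).
    Fapp(Vec<u8>),
    /// Plain delivery; also the fallback when no FAPP router is configured.
    Deliver(Vec<u8>),
}

fn route_local_delivery(payload: Vec<u8>, has_fapp_router: bool) -> IncomingAction {
    if has_fapp_router && is_fapp_payload(&payload) {
        IncomingAction::Fapp(payload)
    } else {
        // Nodes without FAPP capability still see the raw bytes,
        // preserving backward compatibility.
        IncomingAction::Deliver(payload)
    }
}

fn main() {
    let announce = vec![0x01, 0xAA];
    assert_eq!(
        route_local_delivery(announce.clone(), true),
        IncomingAction::Fapp(announce)
    );
    assert!(!is_fapp_payload(&[0x06]));
    assert!(!is_fapp_payload(&[]));
}
```

The key property is the fallback branch: an unknown tag, an empty payload, or a node without a router all degrade to a plain `Deliver`.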
8eba12170e feat: integrate meshservice crate into workspace
- Add meshservice to workspace members
- Fix quicprochat-client: add MeshTrace/MeshStats slash commands
- Add integration test: meshservice_tcp_transport
- Document integration points in README and docs/status.md
- Verify shared identity (IdentityKeypair → MeshAddress)
2026-04-01 18:56:25 +02:00
a3023ecac1 docs: update status with MeshNode integration 2026-04-01 18:46:01 +02:00
150f30b0d6 feat(p2p): add MeshNode integrating all production modules
New mesh_node.rs providing a production-ready node:
- MeshNodeBuilder for fluent configuration
- MeshConfig integration for all settings
- MeshMetrics tracking for all operations
- Rate limiting on incoming messages
- Backpressure controller
- Graceful shutdown via ShutdownCoordinator
- Optional FappRouter based on capabilities
- MeshRouter for envelope routing
- TransportManager for multi-transport support

Key APIs:
- MeshNodeBuilder::new().fapp_relay().build()
- node.process_incoming() with rate limiting + metrics
- node.gc() for store/routing table cleanup
- node.shutdown() for graceful termination

222 tests passing (203 lib + 3 fapp_flow + 16 multi_node)
2026-04-01 18:45:41 +02:00
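The fluent builder flow named in the commit (`MeshNodeBuilder::new().fapp_relay().build()`) follows the standard consuming-builder pattern. A minimal sketch, with field names and defaults assumed rather than taken from the crate:

```rust
// Illustrative builder sketch; MeshConfig fields and defaults are
// assumptions, only the builder call chain comes from the commit.

#[derive(Debug, Clone)]
struct MeshConfig {
    rate_limit_per_peer: u32,
    gc_interval_secs: u64,
}

impl Default for MeshConfig {
    fn default() -> Self {
        Self { rate_limit_per_peer: 32, gc_interval_secs: 60 }
    }
}

struct MeshNode {
    config: MeshConfig,
    fapp_router_enabled: bool,
}

struct MeshNodeBuilder {
    config: MeshConfig,
    fapp_relay: bool,
}

impl MeshNodeBuilder {
    fn new() -> Self {
        Self { config: MeshConfig::default(), fapp_relay: false }
    }

    /// Enable the optional FappRouter capability.
    fn fapp_relay(mut self) -> Self {
        self.fapp_relay = true;
        self
    }

    /// Override the default configuration.
    fn config(mut self, config: MeshConfig) -> Self {
        self.config = config;
        self
    }

    fn build(self) -> MeshNode {
        MeshNode { config: self.config, fapp_router_enabled: self.fapp_relay }
    }
}

fn main() {
    let node = MeshNodeBuilder::new().fapp_relay().build();
    assert!(node.fapp_router_enabled);
    assert_eq!(node.config.rate_limit_per_peer, 32);
    let _ = MeshNodeBuilder::new().config(MeshConfig::default());
}
```

Each setter consumes and returns `self`, so the chain reads top to bottom and the finished `MeshNode` is immutable by construction.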
a60767a7eb docs: update status with FAPP E2E flow completion 2026-04-01 16:36:41 +02:00
6ae3251ebd feat(fapp): add full integration tests for FAPP flow
New tests/fapp_flow.rs with 3 integration tests:
- full_fapp_flow_announce_query_reserve_confirm: Complete flow
  from therapist announcement through patient reservation to
  confirmation with E2E encryption
- fapp_rejection_flow: Tests the rejection case
- fapp_query_filters: Tests the Fachrichtung (specialty), PLZ (postal code), and other filters

FappRouter additions:
- register_therapist_key(): public method for key registration
- store_announce(): public method for storing announcements

Total tests: 217 (198 lib + 3 fapp_flow + 16 multi_node)
2026-04-01 16:35:57 +02:00
ad636b874b feat(fapp): add E2E encryption for SlotReserve/SlotConfirm
- E2E crypto using X25519 key exchange + ChaCha20-Poly1305
- PatientEphemeralKey: generates keypair for reservation
- TherapistCrypto: decrypts reserves, creates confirms with FS
- PatientCrypto: creates reserves, decrypts confirmations
- Wire format helpers for Reserve/Confirm CBOR serialization

FappRouter updates:
- Added DeliverReserve/DeliverConfirm action variants
- process_slot_reserve(): routes to therapist or floods
- process_slot_confirm(): delivers to patient
- send_reserve/send_confirm(): capability-checked sends
- send_response(): relay-to-patient response routing

FappStore additions:
- announces_iter(): iterate all announce vectors
- find_by_id(): lookup announce by ID

29 FAPP tests passing (24 fapp + 7 fapp_router + 5 new E2E crypto)
2026-04-01 16:34:05 +02:00
afaaf2c417 docs: update status with production infrastructure sprint 2026-04-01 09:22:02 +02:00
50a63a6b96 feat(p2p): add integration tests for production scenarios
16 integration tests covering:
- Rate limiting per-peer isolation
- Store-and-forward for offline peers
- Message deduplication
- Envelope V2 signatures, forwarding, broadcast
- Metrics tracking and snapshots
- Config validation and TOML roundtrip
- Shutdown coordination with task tracking
- Concurrent store access safety
- GC of expired messages

Total tests: 205 (189 lib + 16 integration)
2026-04-01 09:21:32 +02:00
a258f98a40 feat(p2p): add persistence and graceful shutdown
- persistence.rs: Append-only log storage for routing table,
  KeyPackage cache, and messages with compaction and GC
- shutdown.rs: Coordinated shutdown with phase transitions,
  task tracking, connection draining, and hook system

Enables stateful operation and clean restarts.
2026-04-01 09:19:13 +02:00
024b6c91d1 feat(p2p): add production infrastructure modules
- error.rs: Structured error types with context for all subsystems
  (transport, routing, crypto, protocol, store, config)
- config.rs: Runtime configuration with TOML parsing and validation
- metrics.rs: Counter/gauge/histogram metrics with transport-specific
  tracking and JSON-serializable snapshots
- rate_limit.rs: Token bucket rate limiting with per-peer tracking,
  duty cycle enforcement for LoRa, and backpressure control

These modules provide the foundation for production deployment.
2026-04-01 09:16:44 +02:00
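The token-bucket scheme in `rate_limit.rs` can be sketched as below: one bucket per peer, refilled at a fixed rate, with requests denied when the bucket is empty. Refill is driven by an explicit elapsed-time argument so the example is deterministic; the real module presumably uses wall-clock time, and all names here are illustrative.

```rust
// Token-bucket rate limiting with per-peer isolation (sketch).
use std::collections::HashMap;

struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    /// Credit tokens for `elapsed_secs` of wall time, then try to spend one.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

/// Per-peer tracking: each peer gets an independent bucket.
struct RateLimiter {
    buckets: HashMap<String, TokenBucket>,
    capacity: f64,
    refill_per_sec: f64,
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { buckets: HashMap::new(), capacity, refill_per_sec }
    }

    fn allow(&mut self, peer: &str, elapsed_secs: f64) -> bool {
        let (capacity, refill) = (self.capacity, self.refill_per_sec);
        self.buckets
            .entry(peer.to_string())
            .or_insert_with(|| TokenBucket::new(capacity, refill))
            .try_acquire(elapsed_secs)
    }
}

fn main() {
    let mut limiter = RateLimiter::new(2.0, 1.0);
    // Peer A burns its burst of 2 and is then throttled...
    assert!(limiter.allow("a", 0.0));
    assert!(limiter.allow("a", 0.0));
    assert!(!limiter.allow("a", 0.0));
    // ...but peer B has its own bucket and is still admitted.
    assert!(limiter.allow("b", 0.0));
    // After one second of refill, peer A gets a token back.
    assert!(limiter.allow("a", 1.0));
}
```

The same shape extends naturally to the LoRa duty-cycle case: the "tokens" become airtime budget rather than message count.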
ac36534063 docs: update status with mesh infrastructure progress
Completed in this session:
- KeyPackage distribution over mesh (announce-based)
- Transport capability negotiation
- MLS-Lite to full MLS upgrade path

Updated mesh-protocol-gaps.md to reflect completed items.
2026-04-01 09:01:44 +02:00
7be7287ba2 feat(mesh): add MLS-Lite to full MLS upgrade path
crypto_negotiation module enables transitioning between crypto modes:

GroupCryptoState tracks current mode:
- MlsLite (signed/unsigned)
- FullMls (classical/hybrid)
- Upgrading (transition state)

MlsLiteBootstrap derives MLS-Lite keys from MLS epoch secret:
- Enables fallback to MLS-Lite over constrained links
- Same group can use full MLS over WiFi, MLS-Lite over LoRa

Upgrade protocol:
1. Member sends KeyPackage over fast link
2. Creator creates MLS Welcome
3. Group transitions to full MLS
4. Optionally maintains MLS-Lite fallback for constrained links
2026-04-01 09:00:57 +02:00
3c6eebdb00 feat(mesh): add transport capability negotiation
TransportCapability enum classifies transports by bandwidth/MTU:
- Unconstrained (≥1 Mbps): Full MLS with PQ-KEM
- Medium (≥10 kbps): Full MLS classical
- Constrained (≥1 kbps): MLS-Lite with signature
- SeverelyConstrained (<1 kbps): MLS-Lite minimal

TransportManager now provides:
- best_transport() - highest capability transport
- recommended_crypto() - appropriate crypto mode
- supports_mls() - whether any transport handles full MLS
- select_for_size() - best transport for a given payload

CryptoMode enum with overhead estimates for each mode.
2026-04-01 08:59:43 +02:00
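The bandwidth-tier mapping above translates directly into a classification function. The thresholds follow the listed tiers; the enum and function names are assumptions standing in for the real `TransportManager` API.

```rust
// Sketch: bandwidth -> capability tier -> recommended crypto mode.

#[derive(Debug, PartialEq)]
enum TransportCapability {
    SeverelyConstrained, // < 1 kbps: MLS-Lite minimal
    Constrained,         // >= 1 kbps: MLS-Lite with signature
    Medium,              // >= 10 kbps: full MLS classical
    Unconstrained,       // >= 1 Mbps: full MLS with PQ-KEM
}

#[derive(Debug, PartialEq)]
enum CryptoMode {
    MlsLiteMinimal,
    MlsLiteSigned,
    FullMlsClassical,
    FullMlsHybrid,
}

fn classify(bits_per_sec: u64) -> TransportCapability {
    match bits_per_sec {
        bps if bps >= 1_000_000 => TransportCapability::Unconstrained,
        bps if bps >= 10_000 => TransportCapability::Medium,
        bps if bps >= 1_000 => TransportCapability::Constrained,
        _ => TransportCapability::SeverelyConstrained,
    }
}

fn recommended_crypto(cap: &TransportCapability) -> CryptoMode {
    match cap {
        TransportCapability::Unconstrained => CryptoMode::FullMlsHybrid,
        TransportCapability::Medium => CryptoMode::FullMlsClassical,
        TransportCapability::Constrained => CryptoMode::MlsLiteSigned,
        TransportCapability::SeverelyConstrained => CryptoMode::MlsLiteMinimal,
    }
}

fn main() {
    // A LoRa SF12 link (on the order of 300 bps) lands in the lowest tier.
    assert_eq!(classify(300), TransportCapability::SeverelyConstrained);
    // WiFi easily clears the 1 Mbps bar for hybrid PQ MLS.
    assert_eq!(recommended_crypto(&classify(50_000_000)), CryptoMode::FullMlsHybrid);
}
```

`best_transport()` then reduces to picking the transport whose classification is highest, and `supports_mls()` to checking whether any transport reaches at least the `Medium` tier.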
eee1e9f278 feat(mesh): add KeyPackage distribution over mesh
Implements announce-based KeyPackage distribution for serverless MLS:

- MeshAnnounce now includes optional `keypackage_hash` field (8 bytes)
- CAP_MLS_READY capability flag for nodes with KeyPackages
- KeyPackageCache for storing received KeyPackages:
  - Indexed by mesh address
  - Multiple per address (for rotation)
  - TTL-based expiry
  - Capacity-bounded with LRU eviction
- Mesh protocol messages:
  - KeyPackageRequest (request by address or hash)
  - KeyPackageResponse (KeyPackage + hash)
  - KeyPackageUnavailable (negative response)

Protocol flow:
1. Bob announces with keypackage_hash
2. Alice requests KeyPackage via mesh
3. Bob (or relay) responds with full KeyPackage
4. Alice creates MLS Welcome, sends to Bob via mesh
2026-04-01 08:57:49 +02:00
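The KeyPackageCache semantics above (multiple packages per address for rotation, TTL-based expiry) can be sketched as follows. Capacity bounding and LRU eviction are omitted for brevity, and all names are illustrative rather than the crate's actual definitions.

```rust
// TTL-expiry sketch for a per-address KeyPackage cache.
use std::collections::HashMap;

type MeshAddr = [u8; 16];

struct CachedKeyPackage {
    bytes: Vec<u8>,
    expires_at: u64, // unix seconds
}

struct KeyPackageCache {
    by_addr: HashMap<MeshAddr, Vec<CachedKeyPackage>>,
}

impl KeyPackageCache {
    fn new() -> Self {
        Self { by_addr: HashMap::new() }
    }

    /// Store a KeyPackage; multiple per address supports rotation.
    fn insert(&mut self, addr: MeshAddr, bytes: Vec<u8>, expires_at: u64) {
        self.by_addr
            .entry(addr)
            .or_default()
            .push(CachedKeyPackage { bytes, expires_at });
    }

    /// Return all unexpired KeyPackages for an address, pruning stale ones.
    fn get(&mut self, addr: &MeshAddr, now: u64) -> Vec<&[u8]> {
        if let Some(entries) = self.by_addr.get_mut(addr) {
            entries.retain(|e| e.expires_at > now);
            entries.iter().map(|e| e.bytes.as_slice()).collect()
        } else {
            Vec::new()
        }
    }
}

fn main() {
    let mut cache = KeyPackageCache::new();
    let bob = [0u8; 16];
    cache.insert(bob, vec![1, 2, 3], 100); // expires at t = 100
    cache.insert(bob, vec![4, 5, 6], 300); // rotated package, longer TTL
    assert_eq!(cache.get(&bob, 50).len(), 2);
    assert_eq!(cache.get(&bob, 200), vec![&[4u8, 5, 6][..]]);
}
```

In the protocol flow, Alice consults this cache before issuing a `KeyPackageRequest` over the mesh; a relay holding a fresh entry can answer on Bob's behalf.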
5d1688d89f docs: design generic Mesh Service Layer
Vision: FAPP is just one service on a generic platform.
Same infrastructure can support:
- Housing (rooms, flats)
- Repair (craftsmen)
- Tutoring
- Medical appointments
- Legal consultations
- Events/tickets
- Custom services

Key concepts:
- Service ID namespacing (32-bit)
- Generic ServiceMessage envelope
- ServiceRouter with pluggable handlers
- ServiceStore trait for per-service caching
- Generic verification framework
- Migration path for existing FAPP

Architecture:
  Applications → Service Layer → Mesh Layer → Transport
2026-04-01 08:02:39 +02:00
56331632fd feat(fapp): add security model + profile_url for verification
docs/specs/fapp-security.md:
- Full threat model for patient protection
- 3-level verification roadmap (transparency → endorsements → registry)
- UI warning mockups
- Technical implementation plan
- Honest assessment of limitations

SlotAnnounce changes:
- Added profile_url field for therapist verification
- New with_profile() constructor
- profile_url included in signature

docs/specs/fapp-protocol.md:
- Added Security & Anti-Fraud section
- Link to full security spec
2026-04-01 07:56:19 +02:00
12846bd2a0 docs: add Mesh & P2P features section to README
- Full table of mesh networking modules
- FAPP protocol explanation with code example
- Privacy model summary
- Link to protocol spec
2026-04-01 07:52:52 +02:00
dd2041df20 feat(fapp): add integration demo + update status
examples/fapp_demo.rs:
- Therapist publishes SlotAnnounce
- Relay caches and handles query
- Patient sends SlotQuery, gets response
- Shows full FappRouter API flow

docs/status.md:
- Updated FAPP integration status
- FappRouter now implemented
- Remaining: multi-node test, SlotReserve/Confirm, LoRa
2026-04-01 07:52:01 +02:00
65ce5aec18 feat(fapp): add FappRouter for mesh integration
New fapp_router.rs module:
- FappAction enum (Ignore, Dropped, Forward, QueryResponse)
- Wire format: 1-byte tag (0x01-0x05) + CBOR body
- FappRouter with shared RoutingTable and TransportManager
- handle_incoming() decodes and dispatches FAPP frames
- process_slot_announce() with relay/flood logic
- process_slot_query() answers from local FappStore
- broadcast_announce() / send_query() for outbound floods
- drain_pending_sends() for async send integration
- 3 unit tests

Also fixed borrow checker issue in FappStore::store
2026-04-01 07:47:33 +02:00
0b3d5c5100 docs: FAPP integration next steps + definition of done 2026-04-01 00:15:37 +02:00
cbfa7e16c4 feat: FAPP — Free Appointment Propagation Protocol for psychotherapy discovery 2026-03-31 09:29:41 +00:00
e2c04cf0c3 docs: update status with implementation sprint results
Completed S4-S5 and MLS-Lite implementation:
- MeshRouter with multi-hop routing
- REPL commands /mesh trace, /mesh stats
- MeshEnvelope V2 with truncated addresses
- MLS-Lite lightweight encryption

Key finding: Classical MLS (306B KeyPackage) IS LoRa-viable!
2026-03-30 23:54:05 +02:00
bcde8b733c docs: update mesh-protocol-gaps with actual measurements
Key findings from actual benchmarks:
- MLS KeyPackage: 306 bytes (6 LoRa fragments, ~4 sec)
- MLS Welcome: 840 bytes (17 fragments, ~10 sec)
- MLS-Lite: 129 bytes without sig, 262 with sig
- MeshEnvelope V2: 336 bytes (~18% savings over V1)

Classical MLS is LoRa-viable! Group setup takes ~14 sec at 1% duty.
Post-quantum hybrid (2.6KB KeyPackage) is still impractical.

Updated action items to reflect completed work:
- MLS-Lite implemented
- MeshEnvelope V2 implemented
- Size measurements complete
2026-03-30 23:53:27 +02:00
237f4360e4 fix: adjust CBOR overhead assertions to match actual measurements
CBOR with field names has higher overhead than raw binary formats.
Updated assertions to reflect actual measured sizes:
- MeshEnvelope V1: ~410 bytes (empty payload)
- MeshEnvelope V2: ~336 bytes (~18% savings from truncated addresses)
- MLS-Lite: ~129 bytes without sig, ~262 with sig

Also fixed serde compatibility for [u8; 64] signature arrays by
converting to Vec<u8>.
2026-03-30 23:52:13 +02:00
a055706236 feat(mesh): add MLS-Lite lightweight encryption for constrained links
MLS-Lite provides group encryption without full MLS overhead:
- Pre-shared group secret (QR code, NFC, or MLS epoch export)
- ChaCha20-Poly1305 symmetric encryption (same as MLS app messages)
- Per-message nonce from epoch + sequence
- Replay protection via sliding window
- Optional Ed25519 signatures

Wire overhead: ~41 bytes without signature, ~105 with signature
(vs ~174 bytes for MeshEnvelope V1)

Tradeoffs vs full MLS:
- No automatic post-compromise security (manual key rotation)
- No automatic forward secrecy (only per-epoch)
- Keys are pre-shared, not negotiated

Designed for SF12 LoRa where MLS KeyPackages are impractical.
2026-03-30 23:48:25 +02:00
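Two of the MLS-Lite mechanisms named above, the per-message nonce from epoch + sequence and the sliding-window replay protection, can be sketched in std-only Rust. The 4-byte-epoch / 8-byte-sequence nonce layout and the 64-entry window size are assumptions, not measurements from the crate.

```rust
// Sketch of MLS-Lite nonce derivation and replay protection.

/// Build a 12-byte ChaCha20-Poly1305 nonce from epoch and per-epoch
/// sequence. Uniqueness holds as long as (epoch, seq) is never reused.
fn nonce_from(epoch: u32, seq: u64) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    nonce[..4].copy_from_slice(&epoch.to_be_bytes());
    nonce[4..].copy_from_slice(&seq.to_be_bytes());
    nonce
}

/// 64-entry sliding window: accepts each sequence number at most once,
/// rejects anything older than the window.
struct ReplayWindow {
    highest: u64,
    bitmap: u64, // bit i set => (highest - i) already seen
}

impl ReplayWindow {
    fn new() -> Self {
        Self { highest: 0, bitmap: 0 }
    }

    fn check_and_update(&mut self, seq: u64) -> bool {
        if seq > self.highest {
            let shift = seq - self.highest;
            self.bitmap = if shift >= 64 { 0 } else { self.bitmap << shift };
            self.bitmap |= 1; // mark the new highest as seen
            self.highest = seq;
            true
        } else {
            let offset = self.highest - seq;
            if offset >= 64 {
                return false; // too old to track; reject
            }
            let bit = 1u64 << offset;
            if self.bitmap & bit != 0 {
                false // replay
            } else {
                self.bitmap |= bit; // out-of-order but fresh
                true
            }
        }
    }
}

fn main() {
    assert_ne!(nonce_from(1, 1), nonce_from(1, 2)); // distinct per message
    let mut w = ReplayWindow::new();
    assert!(w.check_and_update(5));
    assert!(w.check_and_update(3)); // out-of-order delivery is fine
    assert!(!w.check_and_update(5)); // replayed
    assert!(!w.check_and_update(3)); // replayed
    assert!(w.check_and_update(6));
}
```

The window tolerates the out-of-order delivery a multi-hop mesh produces while still rejecting exact duplicates, which matters because forwarding floods the same envelope along multiple paths.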
9cbf824db6 feat(mesh): add MeshEnvelopeV2 with truncated 16-byte addresses
S5: Compact envelope format for constrained links:
- 16-byte truncated addresses (MeshAddress) instead of 32-byte keys
- 16-byte truncated content ID
- u16 TTL and u32 timestamp (smaller than V1)
- Priority field (Low/Normal/High/Emergency)
- ~30-50 bytes savings per envelope vs V1

Full public keys are exchanged during announce phase and cached in
routing table. Envelope only needs addresses for routing.
2026-03-30 23:46:24 +02:00
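The address truncation above is the core of the V2 savings: envelopes carry a 16-byte `MeshAddress` derived from the 32-byte public key, while full keys travel once during announce. Whether the real code takes a key prefix or a hash is an assumption in this sketch.

```rust
// Sketch: 16-byte truncated mesh address from a 32-byte public key.

#[derive(Debug, PartialEq, Eq, Hash, Clone, Copy)]
struct MeshAddress([u8; 16]);

impl MeshAddress {
    /// Truncate a full public key to a routable 16-byte address.
    /// (Prefix truncation is an assumption; a hash would work too.)
    fn from_public_key(key: &[u8; 32]) -> Self {
        let mut addr = [0u8; 16];
        addr.copy_from_slice(&key[..16]);
        MeshAddress(addr)
    }
}

fn main() {
    let key = [7u8; 32];
    let addr = MeshAddress::from_public_key(&key);
    // Deterministic: the same key always maps to the same address,
    // so routing-table lookups keyed by MeshAddress stay stable.
    assert_eq!(addr, MeshAddress::from_public_key(&key));
    assert_eq!(addr.0, [7u8; 16]);
}
```

With source and destination both truncated, an envelope saves 32 bytes on addresses alone, consistent with the ~30-50 byte figure above once the smaller TTL and timestamp fields are added.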
3f81837112 test: add MLS and MeshEnvelope size measurement tests
- measure_mls_wire_sizes: KeyPackage, Welcome, Commit, AppMessage sizes
- measure_mls_wire_sizes_hybrid: same with post-quantum mode
- measure_mesh_envelope_overhead: MeshEnvelope overhead for various payloads

These tests print actual byte sizes to inform constrained link
feasibility planning (LoRa SF12, MLS-Lite design).
2026-03-30 23:45:07 +02:00
db49d83fda feat(mesh): add /mesh trace and /mesh stats REPL commands
- /mesh trace <address> - show route to a mesh address (stub, needs MeshRouter integration)
- /mesh stats - show delivery statistics per destination (stub)
- /mesh store now shows actual message count from P2pNode when active
- Updated help text with new commands
2026-03-30 23:43:52 +02:00
9b09f09892 docs: update status with mesh gap analysis findings
Key insight: best-in-class crypto but unproven mesh efficiency.
Priority actions: complete S4, measure MLS sizes, design MLS-Lite.
2026-03-30 23:30:00 +02:00
92fefda41d docs: sharpen positioning with mesh focus and honest limitations
- New elevator pitch: "MLS + PQ-KEM over multi-hop mesh"
- Competitive differentiation table vs Meshtastic/Reticulum/Briar
- Acknowledge MLS overhead and KeyPackage distribution gaps
- Taglines: "Reticulum's mesh + Signal's crypto + post-quantum ready"
2026-03-30 23:29:56 +02:00
84ec822823 docs: add mesh protocol comparison (Reticulum, Meshtastic, Briar, Berty)
Technical comparison showing QuicProChat's differentiation:
- Only mesh protocol with MLS group encryption + PQ-KEM
- Multi-hop routing + LoRa support (like Reticulum)
- End-to-end crypto (relays see opaque ciphertext)

Honest about tradeoffs vs mature alternatives.
2026-03-30 23:29:50 +02:00
01bc2a4273 docs: add mesh protocol gap analysis and MLS-Lite design
Honest assessment of QuicProChat vs Reticulum/Meshtastic/Briar:
- MLS overhead (500-800 byte KeyPackages) impractical for SF12 LoRa
- KeyPackage distribution over mesh unsolved
- No lightweight mode for constrained links

MLS-Lite design proposes 41-byte overhead symmetric mode:
- ChaCha20-Poly1305 with HKDF key derivation
- Optional Ed25519 signatures
- Upgrade path to full MLS when faster transport available
- QR code / out-of-band key exchange
2026-03-30 23:29:44 +02:00
f9ac921a0c feat(p2p): mesh stack, LoRa mock transport, and relay demo
Implement transport abstraction (TCP/iroh), announce and routing table,
multi-hop mesh router, truncated-address link layer, and LoRa mock
medium with fragmentation plus EU868-style duty-cycle accounting.
Add mesh_lora_relay_demo and scripts/mesh-demo.sh. Relax CBOR vs JSON
size assertion to match fixed-size cryptographic overhead. Extend
.gitignore for nested targets and node_modules.

Made-with: Cursor
2026-03-30 21:19:12 +02:00
d469999c2a feat: add Termux build/setup scripts and client config example 2026-03-21 19:14:07 +01:00
f0901f6597 docs: add messenger comparison with WhatsApp, Telegram, and Signal 2026-03-21 19:14:07 +01:00
543bd442a3 chore: add sprint plan and mark all 7 sprints complete 2026-03-21 19:14:07 +01:00
266bcfed59 docs: add threat model, crypto boundaries, and audit scope documents
Security audit preparation:
- Threat model with STRIDE analysis and 5 threat actors
- Crypto boundaries documenting all 11 primitives and key lifecycle
- Audit scope document for external security firms
2026-03-21 19:14:07 +01:00
c256c38ffb docs: add crate-level documentation and public API doc comments
- Expand crate-level docs for quicprochat-rpc (architecture, wire format,
  module map) and quicprochat-sdk (connection lifecycle, event subscription,
  module descriptions).
- Add /// doc comments to all undocumented pub fn/struct/enum items in
  server domain services (keys, channels, devices, users, account, p2p,
  blobs) and domain types.
- Fix rustdoc broken intra-doc links in plugin-api (HookResult,
  qpc_plugin_init), federation/mod.rs (Store), and client main.rs
  (unescaped brackets).
2026-03-21 19:14:07 +01:00
416618f4cf feat: wire up federation message routing and P2P client fallback
- Enqueue handler checks resolve_destination() for remote recipients
- User resolution supports user@domain federated addresses
- P2P mesh commands (/mesh start, /mesh stop) wired into client session
- Federation routing integration tests with SqlStore
- Fix DashMap deadlock in validate_session()
2026-03-21 19:14:06 +01:00
872695e5f1 test: add unit tests for RPC framing, SDK state machine, and server domain services
Add comprehensive tests across three layers:
- RPC framing: empty payloads, max boundary, truncated frames, multi-frame buffers,
  all status codes, all method ID ranges, payload-too-large for response/push
- SDK: event broadcast send/receive, multiple subscribers, clone preservation,
  conversation upsert, missing conversation, message ID roundtrip, member keys
- Server domain: auth session validation/expiry, channel creation/symmetry/validation,
  delivery peek/ack/sequence ordering/fetch-limited, key package upload/fetch/validation,
  hybrid key batch fetch, size boundary tests
- CI: MSRV (1.75) check job, macOS cross-platform build check
2026-03-21 19:14:06 +01:00
e4c5868b31 feat: add client auto-reconnect, heartbeat, and connection status UI
RPC layer (quicprochat-rpc):
- RpcClient now uses tokio::sync::Mutex<Connection> for safe reconnection
- Auto-reconnect with exponential backoff + jitter on retriable errors
- QUIC-level keepalive via quinn TransportConfig
- subscribe_push() returns Option<PushFrame> with None sentinel on break
- RpcError::is_retriable() classifies transient vs permanent errors
- ConnectionState enum (Connected/Reconnecting/Disconnected) with Display
- Configurable max_retries, base_delay, max_backoff, keepalive_secs

SDK layer (quicprochat-sdk):
- QpqClient wraps RpcClient in Arc for safe heartbeat task sharing
- start_heartbeat() spawns background task checking connection every 30s
- connection_state() exposes RPC-layer state to UI
- Reconnecting event added to ClientEvent enum
- disconnect() aborts heartbeat before closing connection

Client UI (quicprochat-client):
- TUI status bar shows Connected/Reconnecting.../Offline with color
- TUI handles Reconnecting event with attempt count display
- REPL event listener prints connection state changes
- REPL /status shows connection state instead of bool
- Both TUI and REPL call start_heartbeat() on startup
2026-03-21 19:14:06 +01:00
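The reconnect delay policy described for `RpcClient` (exponential backoff with jitter, capped at `max_backoff`) can be sketched as below. The jitter source is a tiny deterministic xorshift so the example is reproducible; the real client presumably uses a proper RNG, and only the parameter names (`base_delay`, `max_backoff`) come from the commit.

```rust
// Exponential backoff with jitter (sketch).
use std::time::Duration;

struct Backoff {
    base_delay: Duration,
    max_backoff: Duration,
    rng_state: u64,
}

impl Backoff {
    fn new(base_delay: Duration, max_backoff: Duration, seed: u64) -> Self {
        Self { base_delay, max_backoff, rng_state: seed.max(1) }
    }

    /// xorshift64, a deterministic stand-in for a real RNG.
    fn next_rand(&mut self) -> u64 {
        let mut x = self.rng_state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.rng_state = x;
        x
    }

    /// Delay before reconnect `attempt` (0-based):
    /// min(base * 2^attempt, max) plus up to 50% jitter, still capped.
    fn delay(&mut self, attempt: u32) -> Duration {
        let exp = self
            .base_delay
            .saturating_mul(2u32.saturating_pow(attempt))
            .min(self.max_backoff);
        let jitter_ms = self.next_rand() % (exp.as_millis() as u64 / 2 + 1);
        (exp + Duration::from_millis(jitter_ms)).min(self.max_backoff)
    }
}

fn main() {
    let mut b = Backoff::new(Duration::from_millis(100), Duration::from_secs(30), 42);
    let d0 = b.delay(0);
    assert!(d0 >= Duration::from_millis(100) && d0 <= Duration::from_millis(150));
    let d5 = b.delay(5);
    assert!(d5 >= Duration::from_millis(3200));
    // The cap holds even for absurd attempt counts.
    assert!(b.delay(30) <= Duration::from_secs(30));
}
```

The jitter term is what prevents a fleet of clients from reconnecting in lockstep after a server restart; without it every client retries at identical intervals.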
66eca065e0 feat: add in-flight RPC tracking, plugin shutdown hooks, and graceful drain
Replace the fixed 30s sleep-based shutdown drain with actual in-flight RPC
tracking using an Arc<AtomicUsize> counter and RAII InFlightGuard. On
SIGTERM/SIGINT the server now:

1. Stops accepting new client and federation connections
2. Sends QUIC CONNECTION_CLOSE with reason "server shutting down"
3. Polls the in-flight counter until it reaches 0 (or drain timeout)
4. Logs drain progress as RPCs complete
5. Calls plugin on_shutdown hooks before exit

Also adds:
- on_shutdown hook to HookVTable (C-ABI plugin API) and ServerHooks trait
- server_in_flight_rpcs Prometheus gauge metric
- Federation connection tracking via shared in-flight counter
2026-03-21 19:14:06 +01:00
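The RAII counter described above is small enough to show in full. This is a sketch of the pattern, not the server's actual code: a guard increments an `Arc<AtomicUsize>` on creation and decrements it on drop, so the drain loop can poll the count until it reaches zero.

```rust
// RAII in-flight counter sketch for graceful drain.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

struct InFlightGuard {
    counter: Arc<AtomicUsize>,
}

impl InFlightGuard {
    /// Register one in-flight RPC. Dropping the guard deregisters it,
    /// even if the handler panics or returns early.
    fn new(counter: Arc<AtomicUsize>) -> Self {
        counter.fetch_add(1, Ordering::SeqCst);
        Self { counter }
    }
}

impl Drop for InFlightGuard {
    fn drop(&mut self) {
        self.counter.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let in_flight = Arc::new(AtomicUsize::new(0));
    {
        let _g1 = InFlightGuard::new(in_flight.clone());
        let _g2 = InFlightGuard::new(in_flight.clone());
        assert_eq!(in_flight.load(Ordering::SeqCst), 2);
    } // both guards dropped here
    // The drain loop polls this until 0 (or the drain timeout elapses).
    assert_eq!(in_flight.load(Ordering::SeqCst), 0);
}
```

Because the decrement lives in `Drop`, the count cannot leak on early returns or panics, which is exactly why this replaces the fixed 30s sleep reliably.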
a05da9b751 feat: upgrade OpenMLS 0.5 → 0.8 for security patches and GREASE support
Migrates all MLS code in quicprochat-core from OpenMLS 0.5 to 0.8:
- StorageProvider replaces OpenMlsKeyStore (keystore.rs full rewrite)
- HybridCryptoProvider updated for new OpenMlsProvider trait
- Group operations updated for new API signatures
- MLS state persistence via MemoryStorage serialization
- tls_codec 0.3 → 0.4, openmls_traits/rust_crypto 0.2 → 0.5
2026-03-21 19:14:06 +01:00
077f48f19c feat: wire up storage latency metrics, uptime gauge, and config timeouts
Instrument DeliveryService (enqueue, fetch) and KeyService
(key_package_upload, key_package_fetch) with storage latency histogram
recording. Add periodic uptime gauge task (every 15s). Log effective
rpc_timeout_secs, storage_timeout_secs, and webtransport_listen at
startup to eliminate dead_code warnings on EffectiveConfig fields.
2026-03-21 19:14:06 +01:00
3708b8df41 fix: remove TUI boolean bug, P2P unwrap violation, and WebTransport placeholder
- Remove `|| true` from cursor positioning condition in v2_tui.rs
- Replace .lock().unwrap() with .expect() in P2P routing tests
- Remove assert!(true) placeholder in WebTransport test
2026-03-21 19:14:06 +01:00
b98dcc27ae chore: rename quicproquo → quicprochat in SECURITY.md 2026-03-21 19:14:06 +01:00
2e081ead8e chore: rename quicproquo → quicprochat in docs, Docker, CI, and packaging
Rename all project references from quicproquo/qpq to quicprochat/qpc
across documentation, Docker configuration, CI workflows, packaging
scripts, operational configs, and build tooling.

- Docker: crate paths, binary names, user/group, data dirs, env vars
- CI: workflow crate references, binary names, artifact names
- Docs: all markdown files under docs/, SDK READMEs, book.toml
- Packaging: OpenWrt Makefile, init script, UCI config (file renames)
- Scripts: justfile, dev-shell, screenshot, cross-compile, ai_team
- Operations: Prometheus config, alert rules, Grafana dashboard
- Config: .env.example (QPQ_* → QPC_*), CODEOWNERS paths
- Top-level: README, CONTRIBUTING, ROADMAP, CLAUDE.md
2026-03-21 19:14:06 +01:00
a710037dde chore: rename quicproquo → quicprochat in Rust workspace
Rename all crate directories, package names, binary names, proto
package/module paths, ALPN strings, env var prefixes, config filenames,
mDNS service names, and plugin ABI symbols from quicproquo/qpq to
quicprochat/qpc.
2026-03-21 19:14:06 +01:00
d8c1392587 chore: public-readiness cleanup
- Remove default Grafana password (fail loudly if unset)
- Clean up stale delivery-proof TODO (already implemented at RPC layer)
- Document TUI send as local-only, point to REPL for E2E delivery
- Gitignore AI workflow files (CLAUDE.md, master-prompt.md, ai_team.py)
- Remove 5 orphaned v1 crates (bot, ffi, gen, gui, mobile)
- Commit ROADMAP.html updates
2026-03-21 19:14:05 +01:00
a9d1f535aa chore: prepare repository for public release
- Add split licensing: AGPL-3.0 for server, Apache-2.0/MIT for all
  other crates and SDKs (Signal-style)
- Add SECURITY.md with vulnerability disclosure policy
- Add CONTRIBUTING.md with build, test, and code standards
- Add "not audited" security disclaimer to README
- Add workspace package metadata (license, repository, keywords)
- Move internal planning docs to docs/internal/ (gitignored)
2026-03-21 19:14:05 +01:00
504 changed files with 32553 additions and 16427 deletions

@@ -1,20 +1,20 @@
-# quicproquo Production Environment Variables
+# quicprochat Production Environment Variables
 # Copy this file to .env and fill in the values.
 # Server auth token (required, >= 16 characters)
-QPQ_AUTH_TOKEN=
+QPC_AUTH_TOKEN=
 # SQLCipher database encryption key (required for store_backend=sql)
-QPQ_DB_KEY=
+QPC_DB_KEY=
 # Ports (defaults shown)
-QPQ_LISTEN_PORT=7000
-QPQ_WS_PORT=9000
+QPC_LISTEN_PORT=7000
+QPC_WS_PORT=9000
 # Optional features
-QPQ_SEALED_SENDER=false
-QPQ_REDACT_LOGS=true
-QPQ_WS_LISTEN=
+QPC_SEALED_SENDER=false
+QPC_REDACT_LOGS=true
+QPC_WS_LISTEN=
-# Grafana admin password
-GRAFANA_ADMIN_PASSWORD=changeme
+# Grafana admin password (required — must be strong, no default)
+GRAFANA_ADMIN_PASSWORD=

.github/CODEOWNERS

@@ -1,4 +1,4 @@
-# Code owners for quicproquo. PRs require review from owners.
+# Code owners for quicprochat. PRs require review from owners.
 # See https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
 # Replace 'maintainers' with your GitHub user/team handle.
@@ -6,32 +6,32 @@
 * @maintainers
 # Security-critical: crypto primitives, MLS, hybrid KEM
-/crates/quicproquo-core/ @maintainers
+/crates/quicprochat-core/ @maintainers
 # Wire format: protobuf definitions, Cap'n Proto schemas
-/crates/quicproquo-proto/ @maintainers
+/crates/quicprochat-proto/ @maintainers
 /proto/ @maintainers
 # Auth and server-side domain logic
-/crates/quicproquo-server/ @maintainers
+/crates/quicprochat-server/ @maintainers
 # Client SDK: auth, conversation store, messaging pipeline
-/crates/quicproquo-sdk/ @maintainers
+/crates/quicprochat-sdk/ @maintainers
 # CLI/TUI client
-/crates/quicproquo-client/ @maintainers
+/crates/quicprochat-client/ @maintainers
 # RPC framework: framing, middleware, QUIC transport
-/crates/quicproquo-rpc/ @maintainers
+/crates/quicprochat-rpc/ @maintainers
 # Key transparency
-/crates/quicproquo-kt/ @maintainers
+/crates/quicprochat-kt/ @maintainers
 # Plugin ABI (no_std C-ABI boundary)
-/crates/quicproquo-plugin-api/ @maintainers
+/crates/quicprochat-plugin-api/ @maintainers
 # P2P transport
-/crates/quicproquo-p2p/ @maintainers
+/crates/quicprochat-p2p/ @maintainers
 # CI and infrastructure
 /.github/ @maintainers

@@ -35,7 +35,7 @@ jobs:
             ${{ runner.os }}-bench-
       - name: Run benchmarks
-        run: cargo bench --package quicproquo-core -- --output-format=bencher 2>&1 | tee bench-output.txt
+        run: cargo bench --package quicprochat-core -- --output-format=bencher 2>&1 | tee bench-output.txt
       - name: Upload HTML reports
         uses: actions/upload-artifact@v4

@@ -102,7 +102,7 @@ jobs:
       - name: Run coverage
         run: |
           cargo tarpaulin --workspace \
-            --exclude quicproquo-p2p \
+            --exclude quicprochat-p2p \
             --out xml \
             --output-dir coverage/ \
             -- --test-threads 1
@@ -113,6 +113,57 @@ jobs:
           name: coverage-report
           path: coverage/cobertura.xml
+  msrv:
+    name: MSRV Check
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install MSRV Rust (1.75)
+        uses: dtolnay/rust-action@1.75
+        with:
+          components: clippy
+      - name: Install capnp
+        run: sudo apt-get update && sudo apt-get install -y capnproto
+      - name: Cache cargo
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.cargo/registry
+            ~/.cargo/git
+            target
+          key: ${{ runner.os }}-msrv-${{ hashFiles('**/Cargo.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-msrv-
+      - name: Check MSRV
+        run: cargo check --workspace
+  macos:
+    name: macOS Build Check
+    runs-on: macos-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install Rust
+        uses: dtolnay/rust-action@stable
+      - name: Cache cargo
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.cargo/registry
+            ~/.cargo/git
+            target
+          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-cargo-
+      - name: Check build
+        run: cargo check --workspace
   docker:
     name: Docker Build
     runs-on: ubuntu-latest

@@ -43,11 +43,11 @@ jobs:
           CARGO_PROFILE_RELEASE_CODEGEN_UNITS: '1'
           CARGO_PROFILE_RELEASE_STRIP: symbols
         run: |
-          cargo zigbuild --release --target ${{ matrix.target }} --bin qpq-server
+          cargo zigbuild --release --target ${{ matrix.target }} --bin qpc-server
      - name: Check binary size
        run: |
-          BINARY="target/${{ matrix.target }}/release/qpq-server"
+          BINARY="target/${{ matrix.target }}/release/qpc-server"
           SIZE=$(stat -c%s "$BINARY")
           SIZE_MB=$(echo "scale=2; $SIZE / 1048576" | bc)
           echo "Binary size: ${SIZE_MB} MB"
@@ -60,6 +60,6 @@ jobs:
       - name: Upload artifact
         uses: actions/upload-artifact@v4
         with:
-          name: qpq-server-${{ matrix.target }}
-          path: target/${{ matrix.target }}/release/qpq-server
+          name: qpc-server-${{ matrix.target }}
+          path: target/${{ matrix.target }}/release/qpc-server
           retention-days: 30

.gitignore

@@ -1,4 +1,6 @@
 /target
+**/target/
+node_modules/
 **/*.rs.bk
 .vscode/
 gitea-mcp.json
@@ -16,4 +18,19 @@ data/
 *.convdb-shm
 *.convdb-wal
 *.pending.ks
-qpq-server.toml
+qpc-server.toml
+# Internal planning docs (not for public distribution)
+docs/internal/
+# AI development workflow files
+master-prompt.md
+scripts/ai_team.py
+# LaTeX build artifacts
+paper/*.aux
+paper/*.bbl
+paper/*.blg
+paper/*.log
+paper/*.out
+paper/*.pdf

@@ -1,23 +1,63 @@
# quicproquo — Claude Code Instructions
# product.quicproquo
-## Agent Team Workflow Rules
+End-to-end encrypted group messaging over QUIC with MLS key agreement and post-quantum crypto.
-### NEVER delete worktrees before preserving changes
-When using agent teams with `isolation: "worktree"`:
-1. **Before calling `TeamDelete`**, always check each worktree for uncommitted or committed changes
-2. **Create a named branch** from each worktree's HEAD and push/preserve it before cleanup
-3. **Preferred pattern**: use `git branch fix/<name> <worktree-HEAD-sha>` to save the work
-4. If an agent reports changes, its worktree branch MUST be merged or saved before the team is deleted
+## Tech Stack
-### Agent team best practices
-- Always have agents **commit their changes** with descriptive messages before shutting them down
-- After all agents report, **list worktrees** (`git worktree list`) and **save branches** before cleanup
-- When using worktree isolation, the sequence must be: agents finish → save branches → merge → TeamDelete
-- Never call TeamDelete as a shortcut to kill zombie agents — use `rm -rf ~/.claude/teams/<name>` for the team metadata only, preserving worktree dirs
- Rust 1.75+, Cargo workspace (12 crates)
- Crypto: OpenMLS 0.8, ML-KEM-768, X25519, ChaCha20-Poly1305, OPAQUE-KE
- Networking: Quinn (QUIC), Tokio, Tower middleware
- Serialization: Protobuf (prost) for v2, Cap'n Proto (legacy v1)
- DB: rusqlite with bundled SQLCipher
- Build: just (justfile), cargo-deny for supply chain audit
### Git workflow
- Conventional commits: `feat:`, `fix:`, `chore:`, `docs:`, `test:`, `refactor:`
- GPG-signed commits only
- No `Co-authored-by` trailers
- No `.unwrap()` on crypto or I/O in non-test paths
- Secrets: zeroize on drop, never in logs
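The zeroize-on-drop rule can be sketched with plain std; the workspace itself presumably uses the `zeroize` crate, and `SecretKey` here is a hypothetical type, not one from quicprochat-core:

```rust
use std::fmt;

/// Hypothetical secret wrapper: bytes are overwritten on drop
/// and never appear in Debug output or logs.
struct SecretKey(Vec<u8>);

impl Drop for SecretKey {
    fn drop(&mut self) {
        // In production use the `zeroize` crate: the compiler may elide a
        // plain loop, while zeroize uses volatile writes plus fences.
        for b in self.0.iter_mut() {
            *b = 0;
        }
    }
}

impl fmt::Debug for SecretKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("SecretKey([REDACTED])") // never log key material
    }
}

fn main() {
    let key = SecretKey(vec![0x42; 32]);
    println!("{key:?}"); // prints SecretKey([REDACTED])
}
```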
## Commands
```bash
just build # Build all workspace crates
just test # Run all tests
just test-core # Crypto tests only
just lint # clippy --workspace -- -D warnings
just fmt # Format check
just fmt-fix # Format fix
just proto # Rebuild protobuf codegen
just server # Build server binary
just client # Build client binary
cargo deny check # Supply chain audit (deny.toml)
```
## Architecture
```
crates/
quicprochat-core/ # Crypto primitives, MLS, double ratchet
quicprochat-proto/ # Protobuf definitions + prost codegen
quicprochat-rpc/ # RPC framework over QUIC
quicprochat-sdk/ # High-level client SDK
quicprochat-server/ # Server binary
quicprochat-client/ # CLI client binary
quicprochat-p2p/ # P2P mesh via iroh (feature-gated: `mesh`)
quicprochat-plugin-api/ # Plugin interface
quicprochat-kt/ # Kotlin/JNI bindings
meshservice/ # Generic decentralized service layer (FAPP, Housing)
apps/gui/ # GUI application
proto/ # .proto source files
schemas/ # Data schemas
docker/ # Container configs
```
## Rules
- `clippy::unwrap_used` is **deny** workspace-wide -- use proper error handling
- `unsafe_code` is **warn** -- avoid unless absolutely necessary, document why
- P2P crate (`quicprochat-p2p`) pulls ~90 extra deps via iroh -- only compiled with `mesh` feature
- All crypto operations must go through quicprochat-core, never inline crypto
- Protobuf is the v2 wire format; Cap'n Proto is legacy v1 only
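The two lint levels above are typically pinned in the workspace manifest so every crate inherits them; a sketch of the assumed `[workspace.lints]` tables (not taken from the repo, requires Rust 1.74+):

```toml
# Workspace manifest sketch -- member crates opt in with `[lints] workspace = true`
[workspace.lints.clippy]
unwrap_used = "deny"
expect_used = "deny"

[workspace.lints.rust]
unsafe_code = "warn"
```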
## Do NOT
- Use `.unwrap()` or `.expect()` outside tests -- clippy will deny it
- Add crypto primitives outside of quicprochat-core
- Enable the `mesh` feature by default (heavy dependency tree)
- Mix v1 (capnp) and v2 (protobuf) serialization in new code
- Skip `cargo deny check` before adding new dependencies
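What the unwrap ban looks like in practice, sketched with std only (the workspace uses `thiserror`/`anyhow` per the tech stack; `load_port` and its error type are made-up examples):

```rust
use std::{fs, io, num::ParseIntError};

// Hypothetical error type; real crates would derive this with `thiserror`.
#[derive(Debug)]
enum ConfigError {
    Io(io::Error),
    BadPort(ParseIntError),
}

impl From<io::Error> for ConfigError {
    fn from(e: io::Error) -> Self { ConfigError::Io(e) }
}
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self { ConfigError::BadPort(e) }
}

// `?` propagates errors to the caller instead of panicking via `.unwrap()`.
fn load_port(path: &str) -> Result<u16, ConfigError> {
    let text = fs::read_to_string(path)?;
    let port: u16 = text.trim().parse()?;
    Ok(port)
}

fn main() {
    match load_port("qpc-server.port") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("config error: {e:?}"),
    }
}
```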

CONTRIBUTING.md Normal file

@@ -0,0 +1,40 @@
# Contributing to quicprochat
## Prerequisites
- **Rust toolchain** (stable) via [rustup](https://rustup.rs/)
- **protoc** is vendored via the `protobuf-src` crate -- no system installation needed
- Git with GPG signing configured
## Building and Testing
```sh
cargo build --workspace
cargo test --workspace
```
A `justfile` is also available for common tasks (`just build`, `just test`, `just proto`, etc.).
## Code Standards
### Commits
- **Conventional commits**: `feat:`, `fix:`, `docs:`, `chore:`, `test:`, `refactor:`
- Commits must be **GPG-signed**
- Commit messages describe *why*, not just *what*
- No `Co-authored-by` trailers
### Rust
- No `.unwrap()` on crypto or I/O operations outside of tests
- Secrets must be zeroized on drop and never logged
- No stubs, `todo!()`, or `unimplemented!()` in production code
- Prefer clarity over cleverness; avoid unnecessary abstractions
## Security Vulnerabilities
Do not open public issues for security bugs. See [SECURITY.md](SECURITY.md) for responsible disclosure instructions.
## Licensing
The server crate (`quicprochat-server`) is licensed under **AGPL-3.0**. All other crates are dual-licensed under **Apache-2.0 / MIT**. By submitting a contribution, you agree to license your work under the applicable license(s).

Cargo.lock generated

File diff suppressed because it is too large.


@@ -1,29 +1,42 @@
[workspace]
resolver = "2"
members = [
-"crates/quicproquo-core",
-"crates/quicproquo-proto",
-"crates/quicproquo-plugin-api",
-"crates/quicproquo-kt",
-"crates/quicproquo-rpc",
-"crates/quicproquo-sdk",
-"crates/quicproquo-server",
-"crates/quicproquo-client",
+"crates/quicprochat-core",
+"crates/quicprochat-proto",
+"crates/quicprochat-plugin-api",
+"crates/quicprochat-kt",
+"crates/quicprochat-rpc",
+"crates/quicprochat-sdk",
+"crates/quicprochat-server",
+"crates/quicprochat-client",
# P2P crate uses iroh (~90 extra deps). Only compiled when the `mesh`
-# feature is enabled on quicproquo-client.
-"crates/quicproquo-p2p",
+# feature is enabled on quicprochat-client.
+"crates/quicprochat-p2p",
+# Generic decentralized service layer (FAPP, Housing, etc.)
+"crates/meshservice",
+# WebSocket bridge for viz/mesh-graph.html (tails NDJSON → browsers)
+"viz/bridge",
]
[workspace.package]
edition = "2021"
rust-version = "1.75"
repository = "https://github.com/quicprochat/quicprochat"
description = "End-to-end encrypted group messaging over QUIC"
keywords = ["encryption", "messaging", "quic", "mls", "post-quantum"]
categories = ["cryptography", "network-programming"]
# Shared dependency versions — bump here to affect the whole workspace.
[workspace.dependencies]
# ── Crypto ────────────────────────────────────────────────────────────────────
-openmls = { version = "0.5", default-features = false, features = ["crypto-subtle"] }
-openmls_rust_crypto = { version = "0.2" }
-openmls_traits = { version = "0.2" }
-# tls_codec must match the version used by openmls 0.5 (which uses 0.3) to avoid
+openmls = { version = "0.8" }
+openmls_rust_crypto = { version = "0.5" }
+openmls_traits = { version = "0.5" }
+openmls_memory_storage = { version = "0.5" }
+# tls_codec must match the version used by openmls 0.8 (which uses 0.4) to avoid
# duplicate Serialize trait versions in the dependency graph.
-tls_codec = { version = "0.3", features = ["derive"] }
+tls_codec = { version = "0.4", features = ["derive"] }
# ml-kem 0.2 is the current stable release (FIPS 203, ML-KEM-768).
ml-kem = { version = "0.2" }
x25519-dalek = { version = "2", features = ["static_secrets"] }
@@ -79,7 +92,8 @@ tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = { version = "1" }
thiserror = { version = "1" }
-# ── CLI ───────────────────────────────────────────────────────────────────────
+# ── Config / CLI ──────────────────────────────────────────────────────────────
toml = { version = "0.8" }
clap = { version = "4", features = ["derive", "env"] }
rustyline = { version = "14" }
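Member crates pick these pinned versions up through workspace inheritance rather than repeating them; a minimal sketch of an assumed member manifest (crate name from the member list above, layout not taken from the repo):

```toml
# crates/quicprochat-core/Cargo.toml (sketch)
[package]
name = "quicprochat-core"
edition.workspace = true
rust-version.workspace = true

[dependencies]
openmls = { workspace = true }
tls_codec = { workspace = true }
```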

LICENSE Normal file

@@ -0,0 +1,30 @@
quicproquo — Split Licensing
============================
This project uses a split license model similar to Signal:
Server (quicproquo-server)
--------------------------
Licensed under the GNU Affero General Public License v3.0 only.
See LICENSE-AGPL-3.0 for the full text.
SPDX-License-Identifier: AGPL-3.0-only
Libraries and SDKs (all other crates)
--------------------------------------
Licensed under either of
* Apache License, Version 2.0 (LICENSE-APACHE)
* MIT License (LICENSE-MIT)
at your option.
SPDX-License-Identifier: Apache-2.0 OR MIT
Contribution
------------
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in this project by you, as defined in the
Apache-2.0 license, shall be dual licensed as above (for library crates)
or AGPL-3.0-only (for the server crate), without any additional terms or
conditions.

LICENSE-AGPL-3.0 Normal file

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

199
LICENSE-APACHE Normal file

@@ -0,0 +1,199 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by the Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding any notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

21
LICENSE-MIT Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) quicproquo contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

119
README.md

@@ -1,14 +1,15 @@
<p align="center">
<img src="assets/logo.png" alt="quicproquo" width="160">
<img src="assets/logo.png" alt="quicprochat" width="160">
</p>
<h1 align="center">quicproquo</h1>
<h1 align="center">quicprochat</h1>
<p align="center">
<strong>End-to-end encrypted group messaging over QUIC, powered by MLS and post-quantum cryptography.</strong>
</p>
<p align="center">
<a href="docs/src/design-rationale/messenger-comparison.md">Why quicprochat?</a> &middot;
<a href="ROADMAP.md">Roadmap</a> &middot;
<a href="docs/sdk/index.md">SDK Docs</a> &middot;
<a href="docs/operations/monitoring.md">Operations</a> &middot;
@@ -17,7 +18,7 @@
---
quicproquo is a production-grade messenger where the server **never sees plaintext**. All traffic flows over QUIC/TLS 1.3, group keys are negotiated with the [MLS protocol](https://www.rfc-editor.org/rfc/rfc9420) (RFC 9420), and a hybrid X25519 + ML-KEM-768 KEM provides post-quantum confidentiality. Written in Rust. 45,000 lines of code. 301 tests.
quicprochat is a production-grade messenger where the server **never sees plaintext**. All traffic flows over QUIC/TLS 1.3, group keys are negotiated with the [MLS protocol](https://www.rfc-editor.org/rfc/rfc9420) (RFC 9420), and a hybrid X25519 + ML-KEM-768 KEM provides post-quantum confidentiality. Written in Rust. 45,000 lines of code. 301 tests.
```
┌─────────────────────────────────────────────────┐
@@ -53,17 +54,17 @@ cargo build --workspace
cargo test --workspace
# Start the server (auto-generates self-signed TLS cert)
cargo run --bin qpq-server -- --allow-insecure-auth
cargo run --bin qpc-server -- --allow-insecure-auth
# Interactive REPL (registers + logs in automatically)
cargo run --bin qpq -- repl --username alice --password secret
cargo run --bin qpc -- repl --username alice --password secret
```
**Two-terminal demo:**
```bash
# Terminal 1 # Terminal 2
qpq repl -u alice -p secretA qpq repl -u bob -p secretB
qpc repl -u alice -p secretA qpc repl -u bob -p secretB
# Alice: # Bob sees:
/dm bob [alice] Hello, Bob!
@@ -73,19 +74,20 @@ Hello, Bob!
## Architecture
```
quicproquo/
quicprochat/
├── crates/
│ ├── quicproquo-core # MLS, hybrid KEM, PQ Noise, OPAQUE, recovery, padding
│ ├── quicproquo-proto # Protobuf (prost) + Cap'n Proto generated types
│ ├── quicproquo-rpc # QUIC RPC framework (framing, dispatch, middleware)
│ ├── quicproquo-sdk # Client SDK (QpqClient, conversation store, outbox)
│ ├── quicproquo-server # QUIC server, 33 RPC methods, domain services, plugins
│ ├── quicproquo-client # CLI + REPL + TUI (Ratatui)
│ ├── quicproquo-kt # Key transparency (Merkle-log, revocation)
│ ├── quicproquo-p2p # iroh P2P, mesh identity, store-and-forward
│ ├── quicproquo-ffi # C FFI (libquicproquo_ffi.so)
│ └── quicproquo-plugin-api # Dynamic plugin hooks (C ABI)
├── proto/qpq/v1/ # 15 .proto schema files
│ ├── quicprochat-core # MLS, hybrid KEM, PQ Noise, OPAQUE, recovery, padding
│ ├── quicprochat-proto # Protobuf (prost) + Cap'n Proto generated types
│ ├── quicprochat-rpc # QUIC RPC framework (framing, dispatch, middleware)
│ ├── quicprochat-sdk # Client SDK (QpqClient, conversation store, outbox)
│ ├── quicprochat-server # QUIC server, 33 RPC methods, domain services, plugins
│ ├── quicprochat-client # CLI + REPL + TUI (Ratatui)
│ ├── quicprochat-kt # Key transparency (Merkle-log, revocation)
│ ├── quicprochat-p2p # iroh P2P, mesh identity, store-and-forward
│ ├── meshservice # Decentralized service layer (FAPP, housing, wire format)
│ ├── quicprochat-ffi # C FFI (libquicprochat_ffi.so)
│ └── quicprochat-plugin-api # Dynamic plugin hooks (C ABI)
├── proto/qpc/v1/ # 15 .proto schema files
├── sdks/ # Go, Python, TypeScript, Swift, Kotlin, Java, Ruby
├── docs/ # mdBook docs, SDK guides, operational runbooks
└── packaging/ # OpenWrt, Docker, cross-compilation
@@ -128,11 +130,66 @@ quicproquo/
- **Dynamic plugins** — load `.so`/`.dylib` at runtime via `--plugin-dir` (6 hook points)
- **Mesh networking** — iroh P2P, mDNS discovery, store-and-forward, broadcast channels
### Mesh & P2P Features
The `quicprochat-p2p` crate provides a full **serverless mesh networking stack**:
| Feature | Module | Description |
|---------|--------|-------------|
| **P2P Transport** | `P2pNode` | Direct QUIC connections via iroh with NAT traversal |
| **Mesh Identity** | `MeshIdentity` | Ed25519 keypairs with 16-byte truncated addresses |
| **Mesh Envelope** | `MeshEnvelope` | Encrypted, signed, TTL-aware message containers |
| **Store-and-Forward** | `MeshStore` | Queue messages for offline recipients |
| **Multi-Hop Routing** | `MeshRouter` | Distributed routing table, forward through intermediaries |
| **Announce Protocol** | `MeshAnnounce` | Signed peer discovery with capability flags |
| **Broadcast Channels** | `BroadcastManager` | Pub/sub with symmetric key encryption |
| **Transport Abstraction** | `TransportManager` | Iroh, TCP, LoRa — route by address type |
| **LoRa Transport** | `transport_lora` | Duty-cycle aware, fragmentation, SF12 support |
| **MLS-Lite** | `mls_lite` | Lightweight symmetric mode for constrained links |
| **FAPP** | `fapp` + `fapp_router` | Free Appointment Propagation Protocol (see below) |
#### FAPP — Decentralized Appointment Discovery
**Problem:** In Germany, finding a psychotherapist takes 3–6 months due to artificial slot visibility limits.
**Solution:** FAPP lets licensed therapists announce free slots into the mesh. Patients discover and reserve slots anonymously — no central registry.
```rust
// Therapist publishes slots
let announce = SlotAnnounce::new(
&therapist_identity,
vec![Fachrichtung::Verhaltenstherapie],
vec![Modalitaet::Praxis, Modalitaet::Video],
vec![Kostentraeger::GKV],
"80331", // PLZ only, never exact address
slots,
approbation_hash,
sequence,
);
fapp_router.broadcast_announce(announce)?;
// Patient queries anonymously
let query = SlotQuery {
fachrichtung: Some(Fachrichtung::Verhaltenstherapie),
plz_prefix: Some("803".into()),
kostentraeger: Some(Kostentraeger::GKV),
..Default::default()
};
fapp_router.send_query(query)?;
```
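The filter semantics implied by `SlotQuery` can be sketched as follows. Note that these minimal types and the `matches` helper are illustrative assumptions based on the fields shown above, not the actual `fapp` crate API: unset (`None`) filters match every announcement, and `plz_prefix` matches on postal-code prefix only.

```rust
// Hypothetical sketch of FAPP query matching. Field names follow the
// README example; the structs here are simplified stand-ins.

#[derive(Debug, Clone, PartialEq)]
enum Fachrichtung { Verhaltenstherapie, Tiefenpsychologie }

#[derive(Debug, Clone, PartialEq)]
enum Kostentraeger { Gkv, Pkv }

struct SlotAnnounce {
    fachrichtung: Vec<Fachrichtung>,
    kostentraeger: Vec<Kostentraeger>,
    plz: String, // 5-digit postal code only, never an exact address
}

#[derive(Default)]
struct SlotQuery {
    fachrichtung: Option<Fachrichtung>,
    plz_prefix: Option<String>,
    kostentraeger: Option<Kostentraeger>,
}

/// A query matches when every populated filter is satisfied;
/// `None` fields act as wildcards.
fn matches(announce: &SlotAnnounce, query: &SlotQuery) -> bool {
    query.fachrichtung.as_ref().map_or(true, |f| announce.fachrichtung.contains(f))
        && query.plz_prefix.as_ref().map_or(true, |p| announce.plz.starts_with(p.as_str()))
        && query.kostentraeger.as_ref().map_or(true, |k| announce.kostentraeger.contains(k))
}

fn main() {
    let announce = SlotAnnounce {
        fachrichtung: vec![Fachrichtung::Verhaltenstherapie],
        kostentraeger: vec![Kostentraeger::Gkv],
        plz: "80331".into(),
    };
    let query = SlotQuery {
        fachrichtung: Some(Fachrichtung::Verhaltenstherapie),
        plz_prefix: Some("803".into()),
        kostentraeger: Some(Kostentraeger::Gkv),
    };
    // The patient's anonymous query from the README example matches.
    assert!(matches(&announce, &query));
}
```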
**Privacy model:**
- Therapist identity is **public** (bound to Approbation hash)
- Patient queries are **anonymous** (no identifying information)
- Reservations use **E2E encryption** to therapist's key
See [`docs/specs/fapp-protocol.md`](docs/specs/fapp-protocol.md) for the full protocol spec.
### Client SDKs
| Language | Location | Transport | Notes |
|---|---|---|---|
| **Rust** | `crates/quicproquo-sdk` | QUIC (quinn) | Reference implementation |
| **Rust** | `crates/quicprochat-sdk` | QUIC (quinn) | Reference implementation |
| **Go** | `sdks/go/` | QUIC (quic-go) | Cap'n Proto RPC, full API |
| **Python** | `sdks/python/` | QUIC (aioquic) + FFI | Async client, PyPI-ready |
| **TypeScript** | `sdks/typescript/` | WebSocket + WASM crypto | 175 KB WASM bundle, browser demo |
@@ -165,8 +222,8 @@ quicproquo/
### Docker
```bash
docker build -t quicproquo -f docker/Dockerfile .
docker run -p 7000:7000 -v qpq-data:/data quicproquo
docker build -t quicprochat -f docker/Dockerfile .
docker run -p 7000:7000 -v qpc-data:/data quicprochat
```
### Production (Docker Compose)
@@ -190,13 +247,13 @@ See [docs/openwrt.md](docs/openwrt.md) for `opkg` packaging and `procd` init scr
```bash
# Environment variables (see .env.example for full list)
QPQ_LISTEN=0.0.0.0:7000
QPQ_AUTH_TOKEN=your-strong-token
QPQ_DB_KEY=your-db-encryption-key
QPQ_STORE_BACKEND=sql
QPQ_METRICS_LISTEN=0.0.0.0:9090
QPQ_DRAIN_TIMEOUT=30
QPQ_RPC_TIMEOUT=30
QPC_LISTEN=0.0.0.0:7000
QPC_AUTH_TOKEN=your-strong-token
QPC_DB_KEY=your-db-encryption-key
QPC_STORE_BACKEND=sql
QPC_METRICS_LISTEN=0.0.0.0:9090
QPC_DRAIN_TIMEOUT=30
QPC_RPC_TIMEOUT=30
```
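For a containerized deployment, the same variables can be supplied through an env file. This is a sketch only: the values are placeholders, and the `QPC_*` names are taken from the list above rather than a verified flag reference:

```shell
# Illustrative: write the server environment to a file, then pass it
# to the container with docker's --env-file option.
cat > qpc.env <<'EOF'
QPC_LISTEN=0.0.0.0:7000
QPC_AUTH_TOKEN=your-strong-token
QPC_DB_KEY=your-db-encryption-key
QPC_STORE_BACKEND=sql
EOF
docker run --env-file qpc.env -p 7000:7000 -v qpc-data:/data quicprochat
```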
## Documentation
@@ -210,11 +267,9 @@ mdbook serve docs # http://localhost:3000
- [Scaling Guide](docs/operations/scaling-guide.md) — resource sizing, horizontal scaling, capacity planning
- [Monitoring](docs/operations/monitoring.md) — Prometheus metrics, Grafana dashboards, alert rules
## Security
## Security Status
This project has **not undergone a formal third-party audit**. See the [threat model](docs/src/cryptography/threat-model.md) for details.
If you discover a security vulnerability, please report it privately.
> **This software has not undergone an independent security audit.** While it implements cryptographic best practices (MLS RFC 9420, OPAQUE, zeroization, constant-time comparisons), no third-party firm has reviewed the implementation. Do not rely on it for high-risk communications until an audit is completed. See [SECURITY.md](SECURITY.md) for our vulnerability disclosure policy.
## License


@@ -3,7 +3,7 @@
<head>
<!-- Book generated using mdBook -->
<meta charset="UTF-8">
<title>Full Roadmap (Phases 1–8) - quicproquo</title>
<title>Full Roadmap (Phases 1-8) - quicproquo</title>
<!-- Custom HTML head -->
@@ -35,10 +35,10 @@
const path_to_root = "";
const default_light_theme = "navy";
const default_dark_theme = "navy";
window.path_to_searchindex_js = "searchindex-92ce38c7.js";
window.path_to_searchindex_js = "searchindex-1e4ee6e2.js";
</script>
<!-- Start loading toc.js asap -->
<script src="toc-4c7c920d.js"></script>
<script src="toc-69b0eb95.js"></script>
</head>
<body>
<div id="mdbook-help-container">
@@ -185,7 +185,7 @@ can be parallelised. Check the box when done.</p>
<p>Eliminate all crash paths, enforce secure defaults, fix deployment blockers.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>1.1 Remove <code>.unwrap()</code> / <code>.expect()</code> from production paths</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>1.1 Remove <code>.unwrap()</code> / <code>.expect()</code> from production paths</strong></p>
<ul>
<li>Replace <code>AUTH_CONTEXT.read().expect()</code> in client RPC with proper <code>Result</code></li>
<li>Replace <code>"0.0.0.0:0".parse().unwrap()</code> in client with fallible parse</li>
@@ -194,7 +194,7 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>1.2 Enforce secure defaults in production mode</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>1.2 Enforce secure defaults in production mode</strong></p>
<ul>
<li>Reject startup if <code>QPQ_PRODUCTION=true</code> and <code>auth_token</code> is empty or <code>"devtoken"</code></li>
<li>Require non-empty <code>db_key</code> when using SQL backend in production</li>
@@ -203,14 +203,14 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>1.3 Fix <code>.gitignore</code></strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>1.3 Fix <code>.gitignore</code></strong></p>
<ul>
<li>Add <code>data/</code>, <code>*.der</code>, <code>*.pem</code>, <code>*.db</code>, <code>*.bin</code> (state files), <code>*.ks</code> (keystores)</li>
<li>Verify no secrets are already tracked: <code>git ls-files data/ *.der *.db</code></li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>1.4 Fix Dockerfile</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>1.4 Fix Dockerfile</strong></p>
<ul>
<li>Sync workspace members (handle excluded <code>p2p</code> crate)</li>
<li>Create dedicated user/group instead of <code>nobody</code></li>
@@ -219,7 +219,7 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>1.5 TLS certificate lifecycle</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>1.5 TLS certificate lifecycle</strong></p>
<ul>
<li>Document CA-signed cert setup (Let's Encrypt / custom CA)</li>
<li>Add <code>--tls-required</code> flag that refuses to start without valid cert</li>
@@ -233,7 +233,7 @@ can be parallelised. Check the box when done.</p>
<p>Build confidence before adding features.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>2.1 Expand E2E test coverage</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>2.1 Expand E2E test coverage</strong></p>
<ul>
<li>Auth failure scenarios (wrong password, expired token, invalid token)</li>
<li>Message ordering verification (send N messages, verify seq numbers)</li>
@@ -246,7 +246,7 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>2.2 Add unit tests for untested paths</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>2.2 Add unit tests for untested paths</strong></p>
<ul>
<li>Client retry logic (exponential backoff, jitter, retriable classification)</li>
<li>REPL input parsing edge cases (empty input, special characters, <code>/</code> commands)</li>
@@ -256,7 +256,7 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>2.3 CI hardening</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>2.3 CI hardening</strong></p>
<ul>
<li>Add <code>.github/CODEOWNERS</code> (crypto, auth, wire-format require 2 reviewers)</li>
<li>Ensure <code>cargo deny check</code> runs on every PR (already in CI — verify)</li>
@@ -266,7 +266,7 @@ can be parallelised. Check the box when done.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>2.4 Clean up build warnings</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>2.4 Clean up build warnings</strong></p>
<ul>
<li>Fix Cap'n Proto generated <code>unused_parens</code> warnings</li>
<li>Remove dead code / unused imports</li>
@@ -328,7 +328,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>3.2 Python SDK (<code>quicproquo-py</code>)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>3.2 Python SDK (<code>quicproquo-py</code>)</strong></p>
<ul>
<li>QUIC transport: <code>aioquic</code> with custom Cap'n Proto stream handler</li>
<li>Cap'n Proto serialization: <code>pycapnp</code> for message types</li>
@@ -357,7 +357,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>3.5 WebTransport server endpoint</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>3.5 WebTransport server endpoint</strong></p>
<ul>
<li>Add HTTP/3 + WebTransport listener to server (same QUIC stack via quinn)</li>
<li>Cap'n Proto RPC framed over WebTransport bidirectional streams</li>
@@ -378,7 +378,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>3.7 SDK documentation and schema publishing</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>3.7 SDK documentation and schema publishing</strong></p>
<ul>
<li>Publish <code>.capnp</code> schemas as the canonical API contract</li>
<li>Document the QUIC + Cap'n Proto connection pattern for each language</li>
@@ -401,7 +401,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>4.2 Key Transparency / revocation</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>4.2 Key Transparency / revocation</strong></p>
<ul>
<li>Replace <code>BasicCredential</code> with X.509-based MLS credentials</li>
<li>Or: verifiable key directory (Merkle tree, auditable log)</li>
@@ -418,7 +418,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>4.4 M7 — Post-quantum MLS integration</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>4.4 M7 — Post-quantum MLS integration</strong></p>
<ul>
<li>Integrate hybrid KEM (X25519 + ML-KEM-768) into the OpenMLS crypto provider</li>
<li>Group key material gets post-quantum confidentiality</li>
@@ -439,7 +439,7 @@ WASM/FFI for the crypto layer.</p>
<p>Make it a product people want to use.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>5.1 Multi-device support</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>5.1 Multi-device support</strong></p>
<ul>
<li>Account → multiple devices, each with own Ed25519 key + MLS KeyPackages</li>
<li>Device graph management (add device, remove device, list devices)</li>
@@ -448,7 +448,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>5.2 Account recovery</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>5.2 Account recovery</strong></p>
<ul>
<li>Recovery codes or backup key (encrypted, stored by user)</li>
<li>Option: server-assisted recovery with security questions (lower security)</li>
@@ -456,7 +456,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>5.3 Full MLS lifecycle</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>5.3 Full MLS lifecycle</strong></p>
<ul>
<li>Member removal (Remove proposal → Commit → fan-out)</li>
<li>Credential update (Update proposal for key rotation)</li>
@@ -483,7 +483,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>5.6 Abuse prevention and moderation</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>5.6 Abuse prevention and moderation</strong></p>
<ul>
<li>Block user (client-side, suppress display)</li>
<li>Report message (encrypted report to admin key)</li>
@@ -491,7 +491,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>5.7 Offline message queue (client-side)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>5.7 Offline message queue (client-side)</strong></p>
<ul>
<li>Queue messages when disconnected, send on reconnect</li>
<li>Idempotent message IDs to prevent duplicates</li>
@@ -504,7 +504,7 @@ WASM/FFI for the crypto layer.</p>
<p>Prepare for real traffic.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>6.1 Distributed rate limiting</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.1 Distributed rate limiting</strong></p>
<ul>
<li>Current: in-memory per-process, lost on restart</li>
<li>Move to Redis or shared state for multi-node deployments</li>
@@ -512,7 +512,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>6.2 Multi-node / horizontal scaling</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.2 Multi-node / horizontal scaling</strong></p>
<ul>
<li>Stateless server design (already mostly there — state is in storage backend)</li>
<li>Shared PostgreSQL or CockroachDB backend (replace SQLite)</li>
@@ -521,7 +521,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>6.3 Operational runbook</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.3 Operational runbook</strong></p>
<ul>
<li>Backup / restore procedures (SQLCipher, file backend)</li>
<li>Key rotation (auth token, TLS cert, DB encryption key)</li>
@@ -531,7 +531,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>6.4 Connection draining and graceful shutdown</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.4 Connection draining and graceful shutdown</strong></p>
<ul>
<li>Stop accepting new connections on SIGTERM</li>
<li>Wait for in-flight RPCs (configurable timeout, default 30s)</li>
@@ -540,7 +540,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>6.5 Request-level timeouts</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.5 Request-level timeouts</strong></p>
<ul>
<li>Per-RPC timeout (prevent slow clients from holding resources)</li>
<li>Database query timeout</li>
@@ -548,7 +548,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>6.6 Observability enhancements</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>6.6 Observability enhancements</strong></p>
<ul>
<li>Request correlation IDs (trace across RPC → storage)</li>
<li>Storage operation latency metrics</li>
@@ -563,7 +563,7 @@ WASM/FFI for the crypto layer.</p>
<p>Long-term vision for wide adoption.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>7.1 Mobile clients (iOS + Android)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.1 Mobile clients (iOS + Android)</strong></p>
<ul>
<li>Use C FFI (Phase 3.3) for crypto + transport (single library)</li>
<li>Push notifications via APNs / FCM (server sends notification on enqueue)</li>
@@ -572,7 +572,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>7.2 Web client (browser)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.2 Web client (browser)</strong></p>
<ul>
<li>Use WASM (Phase 3.4) for crypto</li>
<li>Use WebTransport (Phase 3.5) for native QUIC transport</li>
@@ -583,7 +583,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>7.3 Federation</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.3 Federation</strong></p>
<ul>
<li>Server-to-server protocol via Cap'n Proto RPC over QUIC (see <code>federation.capnp</code>)</li>
<li><code>relayEnqueue</code>, <code>proxyFetchKeyPackage</code>, <code>federationHealth</code> methods</li>
@@ -601,7 +601,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>7.5 Additional language SDKs</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.5 Additional language SDKs</strong></p>
<ul>
<li>Java/Kotlin: JNI bindings to C FFI (Phase 3.3) + native QUIC (netty-quic)</li>
<li>Swift: Swift wrapper over C FFI + Network.framework QUIC</li>
@@ -610,7 +610,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>7.6 P2P / NAT traversal</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.6 P2P / NAT traversal</strong></p>
<ul>
<li>Direct peer-to-peer via iroh (foundation exists in <code>quicproquo-p2p</code>)</li>
<li>Server as fallback relay only</li>
@@ -619,7 +619,7 @@ WASM/FFI for the crypto layer.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>7.7 Traffic analysis resistance</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>7.7 Traffic analysis resistance</strong></p>
<ul>
<li>Padding messages to uniform size</li>
<li>Decoy traffic to mask timing patterns</li>
@@ -706,7 +706,7 @@ functions without any central infrastructure or internet uplink.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>F7 — OpenWrt cross-compilation guide</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>F7 — OpenWrt cross-compilation guide</strong></p>
<ul>
<li>Musl static builds: <code>x86_64-unknown-linux-musl</code>, <code>armv7-unknown-linux-musleabihf</code>, <code>mips-unknown-linux-musl</code></li>
<li>Strip binary: <code>--release</code> + <code>strip</code> → target size &lt; 5 MB for flash storage</li>
@@ -716,7 +716,7 @@ functions without any central infrastructure or internet uplink.</p>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>F8 — Traffic analysis resistance for mesh</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>F8 — Traffic analysis resistance for mesh</strong></p>
<ul>
<li>Uniform message padding to nearest 256-byte boundary (hides message size)</li>
<li>Configurable decoy traffic rate (fake messages to mask send timing)</li>
@@ -731,7 +731,7 @@ functions without any central infrastructure or internet uplink.</p>
and lower the barrier to entry for non-crypto developers.</p>
<ul>
<li>
<p><input disabled="" type="checkbox"> <strong>9.1 Criterion Benchmark Suite (<code>qpq-bench</code>)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.1 Criterion Benchmark Suite (<code>qpq-bench</code>)</strong></p>
<ul>
<li>Criterion benchmarks for all crypto primitives: hybrid KEM encap/decap,
MLS group-add at 10/100/1000 members, epoch rotation, Noise_XX handshake</li>
@@ -748,7 +748,7 @@ MLS group-add at 10/100/1000 members, epoch rotation, Noise_XX handshake</li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>9.3 Full-Screen TUI (Ratatui + Crossterm)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.3 Full-Screen TUI (Ratatui + Crossterm)</strong></p>
<ul>
<li><code>qpq tui</code> launches a full-screen terminal UI: message pane, input bar,
channel sidebar with unread counts, MLS epoch indicator</li>
@@ -757,7 +757,7 @@ channel sidebar with unread counts, MLS epoch indicator</li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>9.4 Delivery Proof Canary Tokens</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.4 Delivery Proof Canary Tokens</strong></p>
<ul>
<li>Server signs <code>Ed25519(SHA-256(message_id || recipient || timestamp))</code> on enqueue</li>
<li>Sender stores proof locally — cryptographic evidence the server queued the message</li>
@@ -765,7 +765,7 @@ channel sidebar with unread counts, MLS epoch indicator</li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>9.5 Verifiable Transcript Archive</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.5 Verifiable Transcript Archive</strong></p>
<ul>
<li><code>GroupMember::export_transcript(path, password)</code> writes encrypted, tamper-evident
message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)</li>
@@ -774,7 +774,7 @@ message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)</li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>9.6 Key Transparency (Merkle-Log Identity Binding)</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.6 Key Transparency (Merkle-Log Identity Binding)</strong></p>
<ul>
<li>Append-only Merkle log of (username, identity_key) bindings in the AS</li>
<li>Clients receive inclusion proofs alongside key fetches</li>
@@ -792,7 +792,7 @@ message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)</li>
</ul>
</li>
<li>
<p><input disabled="" type="checkbox"> <strong>9.8 PQ Noise Transport Layer</strong></p>
<p><input disabled="" type="checkbox" checked=""> <strong>9.8 PQ Noise Transport Layer</strong></p>
<ul>
<li>Hybrid <code>Noise_XX + ML-KEM-768</code> handshake for post-quantum transport security</li>
<li>Closes the harvest-now-decrypt-later gap on handshake metadata (ADR-006)</li>
@@ -840,7 +840,7 @@ message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)</li>
<span class=fa-svg><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 512"><!--! Font Awesome Free 6.2.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2022 Fonticons, Inc. --><path d="M41.4 233.4c-12.5 12.5-12.5 32.8 0 45.3l160 160c12.5 12.5 32.8 12.5 45.3 0s12.5-32.8 0-45.3L109.3 256 246.6 118.6c12.5-12.5 12.5-32.8 0-45.3s-32.8-12.5-45.3 0l-160 160z"/></svg></span>
</a>
<a rel="next prefetch" href="contributing/coding-standards.html" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<a rel="next prefetch" href="operations/monitoring.html" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<span class=fa-svg><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 512"><!--! Font Awesome Free 6.2.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2022 Fonticons, Inc. --><path d="M278.6 233.4c12.5 12.5 12.5 32.8 0 45.3l-160 160c-12.5 12.5-32.8 12.5-45.3 0s-12.5-32.8 0-45.3L210.7 256 73.4 118.6c-12.5-12.5-12.5-32.8 0-45.3s32.8-12.5 45.3 0l160 160z"/></svg></span>
</a>
@@ -854,7 +854,7 @@ message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)</li>
<span class=fa-svg><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 512"><!--! Font Awesome Free 6.2.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2022 Fonticons, Inc. --><path d="M41.4 233.4c-12.5 12.5-12.5 32.8 0 45.3l160 160c12.5 12.5 32.8 12.5 45.3 0s12.5-32.8 0-45.3L109.3 256 246.6 118.6c12.5-12.5 12.5-32.8 0-45.3s-32.8-12.5-45.3 0l-160 160z"/></svg></span>
</a>
<a rel="next prefetch" href="contributing/coding-standards.html" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<a rel="next prefetch" href="operations/monitoring.html" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<span class=fa-svg><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 512"><!--! Font Awesome Free 6.2.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2022 Fonticons, Inc. --><path d="M278.6 233.4c12.5 12.5 12.5 32.8 0 45.3l-160 160c-12.5 12.5-32.8 12.5-45.3 0s-12.5-32.8 0-45.3L210.7 256 73.4 118.6c-12.5-12.5-12.5-32.8 0-45.3s32.8-12.5 45.3 0l160 160z"/></svg></span>
</a>
</nav>


@@ -1,4 +1,4 @@
# Roadmap — quicproquo
# Roadmap — quicprochat
> From proof-of-concept to production-grade E2E encrypted messaging.
>
@@ -18,7 +18,7 @@ Eliminate all crash paths, enforce secure defaults, fix deployment blockers.
- Audit: `grep -rn 'unwrap()\|expect(' crates/` outside `#[cfg(test)]`
- [x] **1.2 Enforce secure defaults in production mode**
- Reject startup if `QPQ_PRODUCTION=true` and `auth_token` is empty or `"devtoken"`
- Reject startup if `QPC_PRODUCTION=true` and `auth_token` is empty or `"devtoken"`
- Require non-empty `db_key` when using SQL backend in production
- Refuse to auto-generate TLS certs in production mode (require existing cert+key)
- Already partially implemented — verify and harden the validation in `config.rs`
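
The 1.2 checks amount to a small startup gate. A minimal sketch, with illustrative function and parameter names (the real `config.rs` validation is more involved):

```rust
// Sketch of the production-mode gate described in 1.2. Names are
// illustrative, not the real config.rs API.
fn validate_production(
    production: bool, // QPC_PRODUCTION
    auth_token: &str,
    db_key: &str,
    sql_backend: bool,
) -> Result<(), String> {
    if !production {
        return Ok(()); // dev mode: no enforcement
    }
    if auth_token.is_empty() || auth_token == "devtoken" {
        return Err("refusing to start: auth_token is empty or the dev default".into());
    }
    if sql_backend && db_key.is_empty() {
        return Err("refusing to start: SQL backend requires a non-empty db_key".into());
    }
    Ok(())
}
```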
@@ -30,8 +30,8 @@ Eliminate all crash paths, enforce secure defaults, fix deployment blockers.
- [x] **1.4 Fix Dockerfile**
- Sync workspace members (handle excluded `p2p` crate)
- Create dedicated user/group instead of `nobody`
- Set writable `QPQ_DATA_DIR` with correct permissions
- Test: `docker build . && docker run --rm -it qpq-server --help`
- Set writable `QPC_DATA_DIR` with correct permissions
- Test: `docker build . && docker run --rm -it qpc-server --help`
- [x] **1.5 TLS certificate lifecycle**
- Document CA-signed cert setup (Let's Encrypt / custom CA)
@@ -121,27 +121,27 @@ WASM/FFI for the crypto layer.
### Implementation
- [x] **3.1 Go SDK (`quicproquo-go`)**
- [x] **3.1 Go SDK (`quicprochat-go`)**
- Generated Go types from `node.capnp` (6487-line codegen, all 24 RPC methods)
- QUIC transport via `quic-go` with TLS 1.3 + ALPN `"capnp"`
- High-level `qpq` package: Connect, Health, ResolveUser, CreateChannel, Send/SendWithTTL, Receive/ReceiveWait, DeleteAccount, OPAQUE auth
- High-level `qpc` package: Connect, Health, ResolveUser, CreateChannel, Send/SendWithTTL, Receive/ReceiveWait, DeleteAccount, OPAQUE auth
- Example CLI in `sdks/go/cmd/example/`
- [x] **3.2 Python SDK (`quicproquo-py`)**
- [x] **3.2 Python SDK (`quicprochat-py`)**
- QUIC transport: `aioquic` with custom Cap'n Proto stream handler
- Cap'n Proto serialization: `pycapnp` for message types
- Manual RPC framing: length-prefixed request/response over QUIC stream
- Async/await API matching the Rust client patterns
- Crypto: PyO3 bindings to `quicproquo-core` for MLS operations
- Publish: PyPI `quicproquo`
- Crypto: PyO3 bindings to `quicprochat-core` for MLS operations
- Publish: PyPI `quicprochat`
- Example: async bot client
- [x] **3.3 C FFI layer (`quicproquo-ffi`)**
- `crates/quicproquo-ffi` with 7 extern "C" functions: connect, login, send, receive, disconnect, last_error, free_string
- Builds as `libquicproquo_ffi.so` / `.dylib` / `.dll`
- Python ctypes wrapper in `examples/python/qpq_client.py`
- [x] **3.3 C FFI layer (`quicprochat-ffi`)**
- `crates/quicprochat-ffi` with 7 extern "C" functions: connect, login, send, receive, disconnect, last_error, free_string
- Builds as `libquicprochat_ffi.so` / `.dylib` / `.dll`
- Python ctypes wrapper in `examples/python/qpc_client.py`
- [x] **3.4 WASM compilation of `quicproquo-core`**
- [x] **3.4 WASM compilation of `quicprochat-core`**
- `wasm-pack build` target producing 175 KB WASM bundle (LTO + opt-level=s)
- 13 `wasm_bindgen` functions: Ed25519 identity, hybrid KEM, safety numbers, sealed sender, padding
- Browser-ready with `crypto.getRandomValues()` RNG
@@ -156,7 +156,7 @@ WASM/FFI for the crypto layer.
- Configurable port: `--webtransport-listen 0.0.0.0:7443`
- Feature-flagged: `--features webtransport`
- [x] **3.6 TypeScript/JavaScript SDK (`@quicproquo/client`)**
- [x] **3.6 TypeScript/JavaScript SDK (`@quicprochat/client`)**
- `QpqClient` class: connect, offline, health, resolveUser, createChannel, send/sendWithTTL, receive, deleteAccount
- WASM crypto wrapper: generateIdentity, sign/verify, hybridEncrypt/Decrypt, computeSafetyNumber, sealedSend, pad
- WebSocket transport with request/response correlation and reconnection
@@ -317,17 +317,17 @@ Long-term vision for wide adoption.
- [x] **7.4 Sealed Sender**
- Sender identity inside MLS ciphertext only (server can't see who sent)
- `sealed_sender` module in quicproquo-core with seal/unseal API
- `sealed_sender` module in quicprochat-core with seal/unseal API
- WASM-accessible via `wasm_bindgen` for browser use
- [x] **7.5 Additional language SDKs**
- Java/Kotlin: JNI bindings to C FFI (Phase 3.3) + native QUIC (netty-quic)
- Swift: Swift wrapper over C FFI + Network.framework QUIC
- Ruby: FFI bindings via `quicproquo-ffi`
- Ruby: FFI bindings via `quicprochat-ffi`
- Evaluate demand-driven — only build SDKs people request
- [x] **7.6 P2P / NAT traversal**
- Direct peer-to-peer via iroh (foundation exists in `quicproquo-p2p`)
- Direct peer-to-peer via iroh (foundation exists in `quicprochat-p2p`)
- Server as fallback relay only
- Reduces latency and single-point-of-failure
- Ref: `FUTURE-IMPROVEMENTS.md § 6.1`
@@ -342,35 +342,35 @@ Long-term vision for wide adoption.
## Phase 8 — Freifunk / Community Mesh Networking
Make qpq a first-class citizen on decentralised, community-operated wireless
networks (Freifunk, BATMAN-adv/Babel routing, OpenWrt). Multiple qpq nodes form
Make qpc a first-class citizen on decentralised, community-operated wireless
networks (Freifunk, BATMAN-adv/Babel routing, OpenWrt). Multiple qpc nodes form
a federated mesh; clients auto-discover nearby nodes via mDNS; the network
functions without any central infrastructure or internet uplink.
### Architecture
```
Client A ─── mDNS discovery ──► nearby qpq node (LAN / mesh)
Client A ─── mDNS discovery ──► nearby qpc node (LAN / mesh)
Cap'n Proto federation
remote qpq node (across mesh)
remote qpc node (across mesh)
```
- [x] **F0 — Re-include `quicproquo-p2p` in workspace; fix ALPN strings**
- Moved `crates/quicproquo-p2p` from `exclude` back into `[workspace] members`
- Fixed ALPN `b"quicnprotochat/p2p/1"` → `b"quicproquo/p2p/1"` (breaking wire change)
- Fixed federation ALPN `b"qnpc-fed"` → `b"quicproquo/federation/1"`
- [x] **F0 — Re-include `quicprochat-p2p` in workspace; fix ALPN strings**
- Moved `crates/quicprochat-p2p` from `exclude` back into `[workspace] members`
- Fixed ALPN `b"quicnprotochat/p2p/1"` → `b"quicprochat/p2p/1"` (breaking wire change)
- Fixed federation ALPN `b"qnpc-fed"` → `b"quicprochat/federation/1"`
- Feature-gated behind `--features mesh` on client (keeps iroh out of default builds)
- [x] **F1 — Federation routing in message delivery**
- `handle_enqueue` and `handle_batch_enqueue` call `federation::routing::resolve_destination()`
- Recipients with a remote home server are relayed via `FederationClient::relay_enqueue()`
- mTLS mutual authentication between nodes (both present client certs, validated against shared CA)
- Config: `QPQ_FEDERATION_LISTEN`, `QPQ_LOCAL_DOMAIN`, `QPQ_FEDERATION_CERT/KEY/CA`
- Config: `QPC_FEDERATION_LISTEN`, `QPC_LOCAL_DOMAIN`, `QPC_FEDERATION_CERT/KEY/CA`
- [x] **F2 — mDNS local peer discovery**
- Server announces `_quicproquo._udp.local.` on startup via `mdns-sd`
- Server announces `_quicprochat._udp.local.` on startup via `mdns-sd`
- Client: `MeshDiscovery::start()` browses for nearby nodes (feature-gated)
- REPL commands: `/mesh peers` (scan + list), `/mesh server <host:port>` (note address)
- Nodes announce: `ver=1`, `server=<host:port>`, `domain=<local_domain>` TXT records
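
The F2 announcement payload is just a small key/value map. A sketch of assembling it (the `mdns-sd` registration call itself is omitted, and the helper name is made up):

```rust
use std::collections::HashMap;

// Illustrative TXT record payload for the `_quicprochat._udp.local.`
// announcement in F2. Keys follow the bullet above; this is not the
// real server code.
fn announce_txt(server: &str, domain: &str) -> HashMap<String, String> {
    HashMap::from([
        ("ver".into(), "1".into()),
        ("server".into(), server.into()),
        ("domain".into(), domain.into()),
    ])
}
```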
@@ -378,7 +378,7 @@ functions without any central infrastructure or internet uplink.
- [x] **F3 — Self-sovereign mesh identity**
- Ed25519 keypair-based identity independent of AS registration
- JSON-persisted seed + known peers directory
- Sign/verify operations for mesh authenticity (`crates/quicproquo-p2p/src/identity.rs`)
- Sign/verify operations for mesh authenticity (`crates/quicprochat-p2p/src/identity.rs`)
- [x] **F4 — Store-and-forward with TTL**
- `MeshEnvelope` with TTL-based expiry, hop_count tracking, max_hops routing limit
@@ -419,7 +419,7 @@ functions without any central infrastructure or internet uplink.
Features designed to attract contributors, create demo/showcase potential,
and lower the barrier to entry for non-crypto developers.
- [x] **9.1 Criterion Benchmark Suite (`qpq-bench`)**
- [x] **9.1 Criterion Benchmark Suite (`qpc-bench`)**
- Criterion benchmarks for all crypto primitives: hybrid KEM encap/decap,
MLS group-add at 10/100/1000 members, epoch rotation, Noise_XX handshake
- CI publishes HTML benchmark reports as GitHub Actions artifacts
@@ -431,7 +431,7 @@ and lower the barrier to entry for non-crypto developers.
- Available in WASM via `compute_safety_number` binding
- [x] **9.3 Full-Screen TUI (Ratatui + Crossterm)**
- `qpq tui` launches a full-screen terminal UI: message pane, input bar,
- `qpc tui` launches a full-screen terminal UI: message pane, input bar,
channel sidebar with unread counts, MLS epoch indicator
- Feature-gated `--features tui` to keep ratatui/crossterm out of default builds
- Existing REPL and CLI subcommands are unaffected
@@ -444,7 +444,7 @@ and lower the barrier to entry for non-crypto developers.
- [x] **9.5 Verifiable Transcript Archive**
- `GroupMember::export_transcript(path, password)` writes encrypted, tamper-evident
message archive (CBOR records, Argon2id + ChaCha20-Poly1305, Merkle chain)
- `qpq export verify` CLI command independently verifies chain integrity
- `qpc export verify` CLI command independently verifies chain integrity
- Useful for legal discovery, audit, or personal backup
- [x] **9.6 Key Transparency (Merkle-Log Identity Binding)**
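
The tamper evidence behind 9.5 (and the append-only log in 9.6) reduces to folding each record's hash into the next link. A toy sketch, using `DefaultHasher` as a dependency-free stand-in for SHA-256 (it is not cryptographically secure):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash chain illustrating the Merkle-chain idea in 9.5.
// DefaultHasher is NOT a cryptographic hash; the real archive
// uses SHA-family hashing per the roadmap.
fn link(prev: u64, record: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    record.hash(&mut h);
    h.finish()
}

/// Fold every record into the chain; any edit changes the head.
fn chain_head(records: &[&[u8]]) -> u64 {
    records.iter().fold(0, |acc, r| link(acc, r))
}
```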

29
SECURITY.md Normal file

@@ -0,0 +1,29 @@
# Security Policy
## Supported Versions
Only the current `main` branch is supported with security updates.
## Reporting a Vulnerability
**Do not use public GitHub issues to report security vulnerabilities.**
Instead, email **security@quicprochat.org** with:
- A description of the vulnerability
- Steps to reproduce or a proof of concept
- The affected component(s) and potential impact
We will acknowledge your report within **48 hours** and work with you on a fix under a **90-day coordinated disclosure** timeline.
## What Qualifies
- Cryptographic implementation bugs (MLS, Noise, hybrid KEM, key derivation)
- Authentication or authorization bypass
- Key material leakage (memory, logs, network)
- Protocol-level flaws (replay, downgrade, impersonation)
- Any issue that compromises message confidentiality or integrity
## Credit
Reporters are credited in published security advisories unless they prefer to remain anonymous. Let us know your preference when you report.

229
SPRINTS.md Normal file

@@ -0,0 +1,229 @@
# quicprochat — Sprint Plan
> 7 sprints synthesized from code audit, architecture analysis, and ecosystem research.
> Each sprint is ~1 week. Sprints are ordered by priority and dependency.
---
## Sprint 1 — Bug Fixes & Code Quality (Quick Wins)
Fix all known bugs, clippy warnings, and dead code before building on top.
- [x] **1.1 Fix boolean logic bug in TUI**
- `crates/quicprochat-client/src/client/v2_tui.rs:832` — remove `|| true`
- Cursor positioning always executes regardless of input state
- [x] **1.2 Fix unwrap violations in P2P router**
- `crates/quicprochat-p2p/src/routing.rs:416,419` — `.lock().unwrap()` on Mutex
- Replace with `.expect("lock poisoned")` or proper error handling
- [x] **1.3 Remove placeholder assertion in WebTransport**
- `crates/quicprochat-server/src/webtransport.rs:418` — `assert!(true);`
- [x] **1.4 Wire up unused metrics**
- `record_storage_latency()` — instrument storage layer calls
- `record_uptime_seconds()` — add periodic heartbeat task in server main loop
- [x] **1.5 Wire up or remove unused config fields**
- `EffectiveConfig::webtransport_listen` — connect to WebTransport listener
- `EffectiveConfig::rpc_timeout_secs` — apply as per-RPC deadline
- `EffectiveConfig::storage_timeout_secs` — apply as DB query timeout
- [x] **1.6 Fix remaining clippy warnings**
- Reduce function arity (2 functions with 8-9 args → use config/param structs)
- Remove useless `format!()` call
- Collapse nested conditionals
- Rename `from_str` method to avoid `FromStr` trait confusion
---
## Sprint 2 — OpenMLS 0.5 → 0.8 Migration
**CRITICAL**: OpenMLS 0.7.2 includes security patches. Staying on 0.5 is a risk.
- [x] **2.1 Migrate StorageProvider trait**
- Old `OpenMlsKeyStore` → new `StorageProvider` (most invasive change)
- Rework `DiskKeyStore` integration (must keep bincode serialization)
- Update all `group.rs` calls that interact with the key store
- [x] **2.2 Update MLS API calls**
- `self_update()` / `propose_self_update()` — add `LeafNodeParameters` arg
- `join_by_external_commit()` — add optional LeafNode params
- `Sender::NewMember` → split into `NewMemberProposal` / `NewMemberCommit`
- [x] **2.3 Handle GREASE support**
- New variants in `ProposalType`, `ExtensionType`, `CredentialType`
- Update match arms to handle unknown/GREASE values
- [x] **2.4 Update AAD handling**
- AAD no longer persisted — set before every API call generating `MlsMessageOut`
- [x] **2.5 Verify FIPS 203 alignment**
- Confirm ML-KEM-768 parameters match final FIPS 203 (not draft)
- Review hybrid KEM against RFC 9794 combination methods
- [x] **2.6 Full test suite pass**
- All 301 tests must pass with OpenMLS 0.8
- Run crypto benchmarks to check for performance regressions
---
## Sprint 3 — Client Resilience
Currently, network glitches cause the client to hang. This blocks v2 launch.
- [x] **3.1 Auto-reconnect with backoff**
- Integrate existing `retry.rs` into `RpcClient::call()` path
- Exponential backoff with jitter (already implemented, not wired)
- Configurable max retries and backoff ceiling
- [x] **3.2 Push subscription recovery**
- Detect broken push stream and re-subscribe automatically
- Buffer missed events during reconnection window
- [x] **3.3 Heartbeat / keepalive**
- Periodic QUIC ping in TUI and REPL modes
- Detect dead connections before user notices
- [x] **3.4 SDK disconnect lifecycle**
- Add `QpcClient::disconnect()` for clean shutdown
- Proper state machine: Connected → Reconnecting → Disconnected
- [x] **3.5 Connection status UI**
- TUI: show connection state in status bar (Connected / Reconnecting / Offline)
- REPL: print status change notifications
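The reconnect schedule in 3.1 (exponential backoff with jitter, capped at a ceiling) can be sketched as follows. The function and parameter names are illustrative, not the actual `retry.rs` API, and a deterministic hash stands in for a real RNG so the sketch has no dependencies:

```rust
/// Delay before retry `attempt` (0-based): exponential growth from `base_ms`,
/// capped at `ceiling_ms`, with uniform jitter pulling the delay down by up to
/// `jitter` (0.0 = no jitter, 0.5 = up to half).
fn backoff_delay_ms(attempt: u32, base_ms: u64, ceiling_ms: u64, jitter: f64) -> u64 {
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    let capped = exp.min(ceiling_ms);
    // Pick uniformly in [capped * (1 - jitter), capped].
    let low = (capped as f64 * (1.0 - jitter)) as u64;
    low + pseudo_rand(attempt) % (capped - low + 1)
}

// Deterministic stand-in for a RNG (splitmix-style bit mixer).
fn pseudo_rand(seed: u32) -> u64 {
    let mut x = seed as u64 ^ 0x9e37_79b9_7f4a_7c15;
    x ^= x >> 33;
    x = x.wrapping_mul(0xff51_afd7_ed55_8ccd);
    x ^ (x >> 33)
}

fn main() {
    for attempt in 0..5 {
        println!("attempt {attempt}: wait {} ms", backoff_delay_ms(attempt, 100, 30_000, 0.5));
    }
}
```

With `jitter = 0.0` this reduces to plain capped exponential backoff: attempt 3 with a 100 ms base yields 800 ms, and the ceiling kicks in once the doubling exceeds it.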
---
## Sprint 4 — Server Hardening
Fix graceful shutdown and wire up timeouts for production readiness.
- [x] **4.1 In-flight RPC tracking**
- Replace fixed 30s shutdown delay with actual in-flight RPC counter
- Drain when counter reaches zero (with configurable max wait)
- [x] **4.2 Apply request-level timeouts**
- Wire `rpc_timeout_secs` config into per-RPC deadline enforcement
- Wire `storage_timeout_secs` into DB query timeouts
- Cancel long-running operations cleanly
- [x] **4.3 Plugin shutdown hooks**
- Add `on_shutdown` hook to `HookVTable`
- Call plugin shutdown before server exits
- [x] **4.4 Federation drain during shutdown**
- Stop accepting federation relay requests on SIGTERM
- Wait for in-flight federation RPCs before exit
- [x] **4.5 Connection draining improvements**
- Send QUIC CONNECTION_CLOSE with application reason
- WebTransport: send close frame before dropping sessions
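A minimal shape for the in-flight counter in 4.1, assuming nothing about the server's actual types: each RPC holds an RAII guard that decrements on drop, and shutdown drains until the count reaches zero or the configurable max wait elapses. The real server would use async primitives; std threads keep the sketch dependency-free.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::{Duration, Instant};

/// Tracks in-flight RPCs (hypothetical type, not the server's actual one).
struct InFlight(Arc<AtomicUsize>);

impl InFlight {
    fn new() -> Self {
        Self(Arc::new(AtomicUsize::new(0)))
    }
    /// Called at RPC entry; the returned guard marks the RPC as finished on drop.
    fn enter(&self) -> Guard {
        self.0.fetch_add(1, Ordering::SeqCst);
        Guard(self.0.clone())
    }
    /// Wait until no RPCs are in flight; false if `max_wait` elapsed first.
    fn drain(&self, max_wait: Duration) -> bool {
        let deadline = Instant::now() + max_wait;
        while self.0.load(Ordering::SeqCst) > 0 {
            if Instant::now() >= deadline {
                return false;
            }
            std::thread::sleep(Duration::from_millis(10));
        }
        true
    }
}

struct Guard(Arc<AtomicUsize>);
impl Drop for Guard {
    fn drop(&mut self) {
        self.0.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let inflight = InFlight::new();
    let g = inflight.enter();
    drop(g); // RPC finished
    println!("drained cleanly: {}", inflight.drain(Duration::from_millis(100)));
}
```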
---
## Sprint 5 — Test Coverage & CI Hardening
Address the major test coverage gaps identified in the audit.
- [x] **5.1 RPC framing unit tests**
- `crates/quicprochat-rpc/src/framing.rs` — encode/decode edge cases
- Malformed frames, truncated input, max-size payloads
- Fuzzing harness for frame parser
- [x] **5.2 SDK state machine tests**
- `crates/quicprochat-sdk/src/conversation.rs` — conversation lifecycle
- `crates/quicprochat-sdk/src/groups.rs` — group join/leave/update
- `crates/quicprochat-sdk/src/messaging.rs` — send/receive/queue
- [x] **5.3 Server domain service tests**
- `crates/quicprochat-server/src/domain/` — all service modules
- Test business logic without DB (mock storage trait)
- [x] **5.4 Integration tests**
- Reconnection scenario (kill server, restart, verify client recovers)
- Graceful shutdown (send SIGTERM during active RPCs, verify drain)
- Multi-node federation relay (if federation wired in Sprint 6)
- [x] **5.5 CI hardening**
- Add MSRV check (Rust 1.75 or declared minimum)
- Add cross-platform CI (macOS, Windows — at least build check)
- Add cargo-fuzz for crypto and parsing code
- Add MIRI for unsafe code in plugin-api/FFI
---
## Sprint 6 — Federation & P2P Integration
Wire up the scaffolded federation and P2P code into working features.
- [x] **6.1 Federation message routing**
- Wire `federation::routing::resolve_destination()` into `handle_enqueue`
- Route messages to remote home servers via `FederationClient::relay_enqueue()`
- Resolve protocol mismatch (Cap'n Proto federation vs Protobuf main RPC)
- [x] **6.2 Federation identity resolution**
- Cross-server user lookup (`user@remote-server`)
- KeyPackage fetching across federated nodes
- [x] **6.3 P2P client integration**
- Wire iroh P2P into client as transport option
- Fallback logic: prefer P2P direct → fall back to server relay
- mDNS discovery in client (already scaffolded, needs activation)
- [x] **6.4 Multipath QUIC evaluation**
- Research draft-ietf-quic-multipath (likely RFC in 2026)
- Prototype: use multiple paths for mesh relay resilience
- Decision: adopt or defer based on quinn support
- [x] **6.5 Federation integration tests**
- Two-server test: register on A, send to user on B, verify delivery
- mTLS mutual auth verification
- Partition tolerance (one node goes down, messages queue)
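The fallback order in 6.3 (prefer a direct P2P path, fall back to server relay) can be expressed as a tiny selection function. Types and names here are hypothetical; the real client would attempt an iroh dial:

```rust
#[derive(Debug, PartialEq)]
enum Transport {
    P2pDirect,
    ServerRelay,
}

/// Prefer a direct P2P path; fall back to the server relay when the dial fails.
fn choose_transport<E>(p2p_dial: impl FnOnce() -> Result<(), E>) -> Transport {
    match p2p_dial() {
        Ok(()) => Transport::P2pDirect,
        Err(_) => Transport::ServerRelay,
    }
}

fn main() {
    // Simulated outcomes: a reachable peer and an unreachable one.
    println!("{:?}", choose_transport(|| Ok::<(), ()>(())));
    println!("{:?}", choose_transport(|| Err::<(), &str>("no direct path")));
}
```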
---
## Sprint 7 — Documentation, Polish & Future Prep
Final polish and forward-looking improvements.
- [x] **7.1 Crate-level documentation**
- Add module-level docs to `quicprochat-plugin-api`, `quicprochat-rpc`, `quicprochat-sdk`
- Doc comments for all public APIs in domain services
- [x] **7.2 Refactor high-arity functions** (none found — already clean)
- Consolidate 8-9 parameter functions into config/param structs
- Improve builder patterns where appropriate
- [ ] **7.3 Review RFC 9750 (MLS Architecture)** (deferred — requires manual review)
- Verify quicprochat's AS/DS split aligns with RFC 9750 recommendations
- Document any deviations and rationale
- [ ] **7.4 Desktop client evaluation** (deferred — requires Tauri prototype)
- Prototype Tauri v2 desktop shell wrapping the TUI or a web UI
- Evaluate effort to ship cross-platform desktop client
- [x] **7.5 Security pre-audit prep**
- Document all crypto boundaries and trust assumptions
- Create threat model document
- Prepare scope document for external auditors (Roadmap item 4.1)
- Budget: NCC Group / Trail of Bits / Cure53 ($50K-$150K, 4-6 weeks)
- [ ] **7.6 Repository rename** (requires GitHub admin action)
- Rename GitHub repository from `quicproquo` to `quicprochat`
- Update all GitHub URLs, CI badge links, go.mod import paths
- Set up redirect from old repo name
---
## Sprint Summary
| Sprint | Focus | Risk | Key Deliverable |
|--------|-------|------|----------------|
| **1** | Bug fixes & code quality | Low | Zero clippy warnings, metrics wired |
| **2** | OpenMLS 0.5 → 0.8 | High | Security patches applied, FIPS 203 verified |
| **3** | Client resilience | Medium | Auto-reconnect, heartbeat, status UI |
| **4** | Server hardening | Medium | Real graceful shutdown, timeouts enforced |
| **5** | Test coverage & CI | Low | Unit tests for SDK/RPC/domain, fuzzing |
| **6** | Federation & P2P | High | Working cross-server messaging, P2P fallback |
| **7** | Docs, polish & audit prep | Low | Audit-ready, desktop prototype |

assets/logo-ccc.png (binary, 1.3 MiB; not shown)

meshservice `Cargo.toml` (new file, 45 lines):
[package]
name = "meshservice"
version = "0.1.0"
edition = "2021"
authors = ["Chris <c@xorwell.de>"]
description = "Generic decentralized service layer for mesh networks"
license = "MIT"
repository = "https://git.xorwell.de/c/meshservice"
keywords = ["mesh", "p2p", "decentralized", "services"]
categories = ["network-programming"]
[dependencies]
# Serialization
serde = { version = "1.0", features = ["derive"] }
ciborium = "0.2"
# Crypto
ed25519-dalek = { version = "2.1", features = ["serde"] }
sha2 = "0.10"
rand = "0.8"
x25519-dalek = { version = "2.0", features = ["static_secrets"] }
chacha20poly1305 = "0.10"
hkdf = "0.12"
# Async
tokio = { version = "1.36", features = ["sync", "time"] }
# Error handling
anyhow = "1.0"
thiserror = "1.0"
[dev-dependencies]
tokio = { version = "1.36", features = ["rt-multi-thread", "macros"] }
[[example]]
name = "fapp_service"
path = "examples/fapp_service.rs"
[[example]]
name = "housing_service"
path = "examples/housing_service.rs"
[[example]]
name = "multi_service"
path = "examples/multi_service.rs"

meshservice `README.md` (new file, 233 lines):
# MeshService
A generic decentralized service layer for mesh networks. Build any peer-to-peer service following the **Announce → Query → Response → Reserve** pattern.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Application Services │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ FAPP │ │ Housing │ │ Repair │ │ Custom │ ... │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ └────────────┴────────────┴────────────┘ │
│ Service Layer (this crate) │
│ ServiceMessage, ServiceRouter, Verification │
│ ─────────────────────────────────────────────────────── │
│ Mesh Layer │
│ (provided by quicprochat-p2p or other mesh impl) │
└─────────────────────────────────────────────────────────────┘
```
## QuicProChat / quicprochat-p2p
This crate lives in the **product.quicproquo** workspace. Integration with the mesh stack:
- **Ed25519 seed**: `MeshIdentity::seed_bytes()` matches `ServiceIdentity::from_secret(&seed)` (same `ed25519-dalek` derivation as `quicprochat_core::IdentityKeypair`); truncated mesh address is SHA-256(pubkey)[0..16] in both layers.
- **Example transport**: integration test `crates/quicprochat-p2p/tests/meshservice_tcp_transport.rs` sends `wire::encode(ServiceMessage)` over `TcpTransport` (length-prefixed framing). For iroh/production, embed the same bytes in `MeshEnvelope` on ALPN `quicprochat/mesh/1`.
Run the test from the repo root:
```bash
cargo test -p quicprochat-p2p --test meshservice_tcp_transport
```
## Features
- **Generic Protocol**: Any service can be built on top (therapy appointments, housing, repairs, tutoring...)
- **Ed25519 Signatures**: All messages cryptographically signed
- **Verification Framework**: Multi-level trust (self-asserted, peer-endorsed, registry-verified)
- **Efficient Wire Format**: Fixed 64-byte header + CBOR payload
- **Pluggable Handlers**: Register custom services with the router
- **Built-in Services**: FAPP (psychotherapy) and Housing included
## Quick Start
```rust
use meshservice::{
    capabilities,
    identity::ServiceIdentity,
    router::ServiceRouter,
    services::fapp::{FappService, SlotAnnounce, SlotQuery, Specialism, Modality},
};

// Create identity
let identity = ServiceIdentity::generate();

// Create router with FAPP service
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));

// Therapist announces slots
let announce = SlotAnnounce::new(
    &[Specialism::CognitiveBehavioral],
    Modality::VideoCall,
    "104", // Postal prefix
)
.with_slots(3)
.with_profile("https://therapists.de/dr-mueller");

let msg = meshservice::services::fapp::create_announce(&identity, &announce, 1)?;
router.handle(msg, Some(identity.public_key()))?;

// Patient queries
let query = SlotQuery::new(Specialism::CognitiveBehavioral, "104");
let query_msg = meshservice::services::fapp::create_query(&identity, &query)?;
let matches = router.query(&query_msg);
println!("Found {} therapists", matches.len());
```
## Built-in Services
### FAPP (Free Appointment Propagation Protocol)
Decentralized psychotherapy appointment discovery:
| Service ID | Purpose |
|------------|---------|
| `0x0001` | Therapist slot announcements, patient queries |
```rust
use meshservice::services::fapp::{SlotAnnounce, Specialism, Modality};
let announce = SlotAnnounce::new(
    &[Specialism::TraumaFocused, Specialism::CognitiveBehavioral],
    Modality::InPerson,
    "104",
)
.with_slots(2)
.with_profile("https://kbv.de/123");
```
### Housing
Decentralized room/apartment sharing:
| Service ID | Purpose |
|------------|---------|
| `0x0002` | Listing announcements, seeker queries |
```rust
use meshservice::services::housing::{ListingAnnounce, ListingType, amenities};
let listing = ListingAnnounce::new(ListingType::Apartment, 65, 850, "104")
    .with_rooms(2)
    .with_amenities(amenities::FURNISHED | amenities::BALCONY);
```
## Verification Framework
Three trust levels:
| Level | Description | Example |
|-------|-------------|---------|
| 0 - None | Bare announcement | Anonymous |
| 1 - Self-Asserted | Profile URL provided | Website link |
| 2 - Peer-Endorsed | Trusted peers vouch | Community rating |
| 3 - Registry-Verified | Official registry | KBV license |
```rust
use meshservice::verification::{Verification, TrustedVerifiers, VerificationLevel};
// Add trusted verifier
let mut verifiers = TrustedVerifiers::new();
verifiers.add(registry_public_key, "KBV Registry", VerificationLevel::RegistryVerified);
router.set_trusted_verifiers(verifiers);
// Require verification for announces
router.set_min_verification_level(2);
```
## Wire Protocol
64-byte fixed header for efficient parsing:
```
0-3 service_id (u32 LE)
4 message_type (u8)
5 version (u8)
6-7 flags (reserved)
8-23 message_id (16 bytes)
24-39 sender_address (16 bytes)
40-47 sequence (u64 LE)
48-49 ttl_hours (u16 LE)
50-57 timestamp (u64 LE)
58 hop_count (u8)
59 max_hops (u8)
60-63 payload_len (u32 LE)
---
64+ signature (64 bytes)
128+ payload (CBOR)
... verifications (optional CBOR)
```
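A decoder for the fixed header can be written directly from the offsets above. Struct and function names here are illustrative; the crate's `wire` module is authoritative:

```rust
/// Parsed view of the 64-byte fixed header (illustrative field names).
#[derive(Debug, PartialEq)]
struct Header {
    service_id: u32,
    message_type: u8,
    version: u8,
    sequence: u64,
    ttl_hours: u16,
    timestamp: u64,
    hop_count: u8,
    max_hops: u8,
    payload_len: u32,
}

fn parse_header(buf: &[u8; 64]) -> Header {
    let le_u32 = |i: usize| u32::from_le_bytes(buf[i..i + 4].try_into().unwrap());
    let le_u64 = |i: usize| u64::from_le_bytes(buf[i..i + 8].try_into().unwrap());
    Header {
        service_id: le_u32(0),
        message_type: buf[4],
        version: buf[5],
        // bytes 6-7: flags (reserved); 8-23: message_id; 24-39: sender_address
        sequence: le_u64(40),
        ttl_hours: u16::from_le_bytes([buf[48], buf[49]]),
        timestamp: le_u64(50),
        hop_count: buf[58],
        max_hops: buf[59],
        payload_len: le_u32(60),
    }
}

fn main() {
    let mut buf = [0u8; 64];
    buf[0..4].copy_from_slice(&1u32.to_le_bytes()); // FAPP service id 0x0001
    buf[60..64].copy_from_slice(&128u32.to_le_bytes());
    let h = parse_header(&buf);
    println!("service=0x{:04x} payload_len={}", h.service_id, h.payload_len);
}
```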
## Building Custom Services
Implement `ServiceHandler`:
```rust
use meshservice::router::{ServiceHandler, ServiceAction, HandlerContext};
// Also bring ServiceMessage, MessageType, StoredMessage, and ServiceError
// into scope from wherever the crate exports them.

struct MyService;

impl ServiceHandler for MyService {
    fn service_id(&self) -> u32 { 0x8001 } // Custom range
    fn name(&self) -> &str { "MyService" }

    fn handle(&self, message: &ServiceMessage, ctx: &HandlerContext)
        -> Result<ServiceAction, ServiceError>
    {
        match message.message_type {
            MessageType::Announce => Ok(ServiceAction::StoreAndForward),
            MessageType::Query => {
                // Find matches, respond...
                Ok(ServiceAction::Handled)
            }
            _ => Ok(ServiceAction::Drop)
        }
    }

    fn matches_query(&self, announce: &StoredMessage, query: &ServiceMessage) -> bool {
        // Custom matching logic
        true
    }
}
```
## Service IDs
| ID | Service |
|----|---------|
| `0x0001` | FAPP (Psychotherapy) |
| `0x0002` | Housing |
| `0x0003` | Repair |
| `0x0004` | Tutoring |
| `0x0005` | Medical |
| `0x0006` | Legal |
| `0x0007` | Volunteer |
| `0x0008` | Events |
| `0x8000+` | Custom/User-defined |
## Examples
```bash
# FAPP demo (therapist + patient)
cargo run --example fapp_service
# Housing demo (landlord + seeker)
cargo run --example housing_service
# Multi-service mesh
cargo run --example multi_service
```
## Testing
```bash
cargo test
```
## License
MIT

`examples/fapp_service.rs` (new file, 86 lines):
//! FAPP Service Demo
//!
//! Demonstrates therapist announcement and patient query flow.
use meshservice::{
    capabilities,
    identity::ServiceIdentity,
    router::ServiceRouter,
    services::fapp::{
        create_announce, create_query, FappService, Modality, SlotAnnounce, SlotQuery, Specialism,
    },
};

fn main() {
    println!("=== FAPP Service Demo ===\n");

    // Create identities
    let therapist = ServiceIdentity::generate();
    let patient = ServiceIdentity::generate();
    let relay = ServiceIdentity::generate();
    println!("Therapist address: {:?}", hex(&therapist.address()));
    println!("Patient address: {:?}", hex(&patient.address()));
    println!("Relay address: {:?}\n", hex(&relay.address()));

    // Create router with FAPP service
    let mut router = ServiceRouter::new(capabilities::RELAY);
    router.register(Box::new(FappService::relay()));

    // Therapist creates announcement
    let announce = SlotAnnounce::new(
        &[Specialism::CognitiveBehavioral, Specialism::TraumaFocused],
        Modality::VideoCall,
        "104", // Berlin Kreuzberg
    )
    .with_slots(3)
    .with_profile("https://therapists.de/dr-schmidt")
    .with_name("Dr. Anna Schmidt");

    println!("Therapist announces:");
    println!(" Specialisms: CBT, Trauma");
    println!(" Modality: Video");
    println!(" Location: 104xx");
    println!(" Slots: 3");
    println!(" Profile: https://therapists.de/dr-schmidt\n");

    let msg = create_announce(&therapist, &announce, 1).unwrap();
    let action = router.handle(msg.clone(), Some(therapist.public_key())).unwrap();
    println!("Router action: {:?}", action);
    println!("Stored messages: {}\n", router.store().len());

    // Patient creates query
    let query = SlotQuery::new(Specialism::CognitiveBehavioral, "104")
        .with_modality(Modality::VideoCall)
        .with_max_wait(30);

    println!("Patient queries:");
    println!(" Looking for: CBT");
    println!(" Location: 104xx");
    println!(" Modality: Video");
    println!(" Max wait: 30 days\n");

    let query_msg = create_query(&patient, &query).unwrap();

    // Find matches
    let matches = router.query(&query_msg);
    println!("Found {} matching therapist(s):", matches.len());
    for (i, m) in matches.iter().enumerate() {
        if let Ok(data) = meshservice::services::fapp::SlotAnnounce::from_bytes(&m.message.payload) {
            println!(
                " {}. {} in {}xx ({} slots)",
                i + 1,
                data.display_name.as_deref().unwrap_or("Unknown"),
                data.postal_prefix,
                data.available_slots
            );
            if let Some(profile) = &data.profile_url {
                println!(" Verify: {}", profile);
            }
        }
    }

    println!("\n=== Demo Complete ===");
}

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

`examples/housing_service.rs` (new file, 97 lines):
//! Housing Service Demo
//!
//! Demonstrates landlord listing and seeker query flow.
use meshservice::{
    capabilities,
    identity::ServiceIdentity,
    router::ServiceRouter,
    services::housing::{
        amenities, create_announce, create_query, HousingService, ListingAnnounce, ListingQuery,
        ListingType,
    },
};

fn main() {
    println!("=== Housing Service Demo ===\n");

    // Create identities
    let landlord1 = ServiceIdentity::generate();
    let landlord2 = ServiceIdentity::generate();
    let seeker = ServiceIdentity::generate();

    // Create router with Housing service
    let mut router = ServiceRouter::new(capabilities::RELAY);
    router.register(Box::new(HousingService::relay()));

    // Landlord 1: Kreuzberg apartment
    let listing1 = ListingAnnounce::new(ListingType::Apartment, 65, 950, "104")
        .with_rooms(2)
        .with_amenities(amenities::FURNISHED | amenities::BALCONY | amenities::INTERNET)
        .with_title("Sunny 2-room in Kreuzberg");

    println!("Landlord 1 announces:");
    println!(" {} sqm {} in {}xx", listing1.size_sqm, "Apartment", listing1.postal_prefix);
    println!(" Rent: {} EUR/month", listing1.rent_euros());
    println!(" Rooms: {}", listing1.rooms);
    println!(" Amenities: Furnished, Balcony, Internet\n");

    let msg1 = create_announce(&landlord1, &listing1, 1).unwrap();
    router.handle(msg1, Some(landlord1.public_key())).unwrap();

    // Landlord 2: Neukölln shared flat room
    let listing2 = ListingAnnounce::new(ListingType::Room, 18, 450, "120")
        .with_rooms(1)
        .with_amenities(amenities::WASHING_MACHINE | amenities::INTERNET)
        .with_title("Room in friendly WG");

    println!("Landlord 2 announces:");
    println!(" {} sqm {} in {}xx", listing2.size_sqm, "Room", listing2.postal_prefix);
    println!(" Rent: {} EUR/month", listing2.rent_euros());
    println!(" Amenities: Washing machine, Internet\n");

    let msg2 = create_announce(&landlord2, &listing2, 1).unwrap();
    router.handle(msg2, Some(landlord2.public_key())).unwrap();
    println!("Total listings in store: {}\n", router.store().len());

    // Seeker 1: Looking for affordable apartment
    println!("--- Seeker Query 1: Affordable apartment ---");
    let query1 = ListingQuery::new("10", 800) // Any 10xxx area, max 800 EUR
        .with_type(ListingType::Apartment)
        .with_min_size(40);
    println!(" Area: 10xxx");
    println!(" Type: Apartment");
    println!(" Max rent: 800 EUR");
    println!(" Min size: 40 sqm\n");

    let query_msg1 = create_query(&seeker, &query1).unwrap();
    let matches1 = router.query(&query_msg1);
    println!("Found {} matches:", matches1.len());
    for m in &matches1 {
        if let Ok(l) = ListingAnnounce::from_bytes(&m.message.payload) {
            println!(" - {} ({}xx, {} EUR)", l.title.as_deref().unwrap_or("No title"), l.postal_prefix, l.rent_euros());
        }
    }

    // Seeker 2: Looking for any cheap room
    println!("\n--- Seeker Query 2: Any room under 500 EUR ---");
    let query2 = ListingQuery::new("1", 500); // Any 1xxxx area
    let query_msg2 = create_query(&seeker, &query2).unwrap();
    let matches2 = router.query(&query_msg2);
    println!("Found {} matches:", matches2.len());
    for m in &matches2 {
        if let Ok(l) = ListingAnnounce::from_bytes(&m.message.payload) {
            println!(
                " - {} ({}xx, {} sqm, {} EUR)",
                l.title.as_deref().unwrap_or("No title"),
                l.postal_prefix,
                l.size_sqm,
                l.rent_euros()
            );
        }
    }

    println!("\n=== Demo Complete ===");
}

`examples/multi_service.rs` (new file, 89 lines):
//! Multi-Service Demo
//!
//! Shows how multiple services can run on the same mesh router.
use meshservice::{
    capabilities,
    identity::ServiceIdentity,
    router::ServiceRouter,
    service_ids,
    services::{
        fapp::{create_announce as fapp_announce, FappService, Modality, SlotAnnounce, Specialism},
        housing::{
            amenities, create_announce as housing_announce, HousingService, ListingAnnounce,
            ListingType,
        },
    },
    verification::{TrustedVerifiers, Verification, VerificationLevel},
};

fn main() {
    println!("=== Multi-Service Mesh Demo ===\n");

    // Create a router that handles both FAPP and Housing
    let mut router = ServiceRouter::new(capabilities::RELAY | capabilities::CONSUMER);
    router.register(Box::new(FappService::relay()));
    router.register(Box::new(HousingService::relay()));

    println!("Registered services:");
    for (id, name) in router.services() {
        println!(" 0x{:04x} - {}", id, name);
    }
    println!();

    // Create identities
    let therapist = ServiceIdentity::generate();
    let landlord = ServiceIdentity::generate();
    let registry = ServiceIdentity::generate();

    // Setup trusted verifiers
    let mut verifiers = TrustedVerifiers::new();
    verifiers.add(
        registry.public_key(),
        "Health Registry",
        VerificationLevel::RegistryVerified,
    );
    router.set_trusted_verifiers(verifiers);

    // Therapist announcement with verification
    println!("--- Adding FAPP announcement ---");
    let fapp_data = SlotAnnounce::new(&[Specialism::Psychoanalysis], Modality::InPerson, "104")
        .with_profile("https://kbv.de/therapists/12345");
    let mut fapp_msg = fapp_announce(&therapist, &fapp_data, 1).unwrap();

    // Registry verifies therapist
    let verification = Verification::registry(
        &registry,
        &therapist.address(),
        "licensed_therapist",
        "KBV-12345",
    );
    fapp_msg.add_verification(verification);
    router.handle(fapp_msg, Some(therapist.public_key())).unwrap();
    println!("FAPP announcement stored (with registry verification)\n");

    // Housing announcement
    println!("--- Adding Housing announcement ---");
    let housing_data = ListingAnnounce::new(ListingType::Studio, 35, 700, "104")
        .with_amenities(amenities::FURNISHED | amenities::INTERNET)
        .with_title("Cozy studio near therapist offices");
    let housing_msg = housing_announce(&landlord, &housing_data, 1).unwrap();
    router.handle(housing_msg, Some(landlord.public_key())).unwrap();
    println!("Housing announcement stored\n");

    // Summary
    println!("--- Store Summary ---");
    println!("FAPP messages: {}", router.store().service_count(service_ids::FAPP));
    println!("Housing messages: {}", router.store().service_count(service_ids::HOUSING));
    println!("Total messages: {}", router.store().len());

    println!("\n=== Multi-Service Demo Complete ===");
    println!("\nThe mesh can route and store messages for multiple services");
    println!("using a single router instance. Each service has its own:");
    println!(" - Payload format");
    println!(" - Query matching logic");
    println!(" - Handler implementation");
}

Anti-abuse module (new file, 532 lines):
//! Anti-abuse mechanisms for preventing slot blocking and spam.
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};
use sha2::{Digest, Sha256};
/// Rate limiting configuration.
#[derive(Debug, Clone)]
pub struct RateLimits {
    /// Max reservations per sender per hour.
    pub max_reservations_per_hour: u8,
    /// Max pending (unconfirmed) reservations per sender.
    pub max_pending_reservations: u8,
    /// Min time between reservations (seconds).
    pub reservation_cooldown_secs: u32,
    /// Max queries per sender per minute.
    pub max_queries_per_minute: u8,
}

impl Default for RateLimits {
    fn default() -> Self {
        Self {
            max_reservations_per_hour: 3,
            max_pending_reservations: 2,
            reservation_cooldown_secs: 300,
            max_queries_per_minute: 10,
        }
    }
}

/// Tracks sender activity for rate limiting.
#[derive(Debug, Default)]
pub struct RateLimiter {
    limits: RateLimits,
    /// sender_address -> activity
    activity: HashMap<[u8; 16], SenderActivity>,
}

#[derive(Debug, Default)]
struct SenderActivity {
    /// Timestamps of reservations in last hour.
    reservation_times: Vec<u64>,
    /// Count of pending reservations.
    pending_count: u8,
    /// Timestamp of last reservation.
    last_reservation: u64,
    /// Query timestamps in last minute.
    query_times: Vec<u64>,
}
impl RateLimiter {
    /// Create with default limits.
    pub fn new() -> Self {
        Self::default()
    }

    /// Create with custom limits.
    pub fn with_limits(limits: RateLimits) -> Self {
        Self {
            limits,
            activity: HashMap::new(),
        }
    }

    /// Check if a reservation is allowed.
    pub fn check_reservation(&mut self, sender: &[u8; 16]) -> RateLimitResult {
        let now = now();
        let activity = self.activity.entry(*sender).or_default();

        // Clean old entries
        activity.reservation_times.retain(|&t| now - t < 3600);

        // Check cooldown
        if now - activity.last_reservation < u64::from(self.limits.reservation_cooldown_secs) {
            return RateLimitResult::Cooldown {
                wait_secs: self.limits.reservation_cooldown_secs
                    - (now - activity.last_reservation) as u32,
            };
        }

        // Check hourly limit
        if activity.reservation_times.len() >= self.limits.max_reservations_per_hour as usize {
            return RateLimitResult::HourlyLimitReached;
        }

        // Check pending limit
        if activity.pending_count >= self.limits.max_pending_reservations {
            return RateLimitResult::TooManyPending;
        }

        RateLimitResult::Allowed
    }

    /// Record a reservation attempt.
    pub fn record_reservation(&mut self, sender: &[u8; 16]) {
        let now = now();
        let activity = self.activity.entry(*sender).or_default();
        activity.reservation_times.push(now);
        activity.last_reservation = now;
        activity.pending_count = activity.pending_count.saturating_add(1);
    }

    /// Record reservation confirmed/completed (reduce pending).
    pub fn record_reservation_resolved(&mut self, sender: &[u8; 16]) {
        if let Some(activity) = self.activity.get_mut(sender) {
            activity.pending_count = activity.pending_count.saturating_sub(1);
        }
    }

    /// Check if a query is allowed.
    pub fn check_query(&mut self, sender: &[u8; 16]) -> RateLimitResult {
        let now = now();
        let activity = self.activity.entry(*sender).or_default();

        // Clean old entries
        activity.query_times.retain(|&t| now - t < 60);

        if activity.query_times.len() >= self.limits.max_queries_per_minute as usize {
            return RateLimitResult::QueryLimitReached;
        }

        RateLimitResult::Allowed
    }

    /// Record a query.
    pub fn record_query(&mut self, sender: &[u8; 16]) {
        let now = now();
        let activity = self.activity.entry(*sender).or_default();
        activity.query_times.push(now);
    }

    /// Prune old activity data.
    pub fn prune(&mut self) {
        let now = now();
        self.activity.retain(|_, a| {
            a.reservation_times.retain(|&t| now - t < 3600);
            a.query_times.retain(|&t| now - t < 60);
            !a.reservation_times.is_empty() || !a.query_times.is_empty() || a.pending_count > 0
        });
    }
}
/// Result of rate limit check.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum RateLimitResult {
    /// Request allowed.
    Allowed,
    /// Must wait before next reservation.
    Cooldown { wait_secs: u32 },
    /// Hourly reservation limit reached.
    HourlyLimitReached,
    /// Too many pending reservations.
    TooManyPending,
    /// Query rate limit reached.
    QueryLimitReached,
}

impl RateLimitResult {
    pub fn is_allowed(&self) -> bool {
        matches!(self, RateLimitResult::Allowed)
    }
}

/// Proof-of-work for reservation requests.
#[derive(Debug, Clone)]
pub struct ProofOfWork {
    /// Nonce that produces valid hash.
    pub nonce: u64,
    /// Required difficulty (leading zero bits).
    pub difficulty: u8,
}

impl ProofOfWork {
    /// Default difficulty (20 bits ≈ 1-2 seconds on modern CPU).
    pub const DEFAULT_DIFFICULTY: u8 = 20;

    /// Generate proof-of-work for a reservation.
    pub fn generate(reservation_id: &[u8; 16], difficulty: u8) -> Self {
        let mut nonce = 0u64;
        loop {
            if Self::check_hash(reservation_id, nonce, difficulty) {
                return Self { nonce, difficulty };
            }
            nonce = nonce.wrapping_add(1);
        }
    }

    /// Verify proof-of-work.
    pub fn verify(&self, reservation_id: &[u8; 16]) -> bool {
        Self::check_hash(reservation_id, self.nonce, self.difficulty)
    }

    fn check_hash(reservation_id: &[u8; 16], nonce: u64, difficulty: u8) -> bool {
        let mut hasher = Sha256::new();
        hasher.update(reservation_id);
        hasher.update(&nonce.to_le_bytes());
        let hash = hasher.finalize();
        leading_zero_bits(&hash) >= difficulty
    }
}

/// Count leading zero bits in a byte slice.
fn leading_zero_bits(data: &[u8]) -> u8 {
    let mut count = 0u8;
    for byte in data {
        if *byte == 0 {
            count += 8;
        } else {
            count += byte.leading_zeros() as u8;
            break;
        }
    }
    count
}
/// Sender reputation tracking.
#[derive(Debug, Clone, Default)]
pub struct SenderReputation {
    pub address: [u8; 16],
    pub reservations_made: u32,
    pub reservations_honored: u32,
    pub reservations_cancelled: u32,
    pub no_shows: u32,
    pub last_no_show: Option<u64>,
}

impl SenderReputation {
    /// Create for a new sender.
    pub fn new(address: [u8; 16]) -> Self {
        Self {
            address,
            ..Default::default()
        }
    }

    /// Calculate honor rate (0.0 to 1.0).
    pub fn honor_rate(&self) -> f32 {
        if self.reservations_made == 0 {
            return 0.5; // Neutral for new users
        }
        (self.reservations_honored as f32) / (self.reservations_made as f32)
    }

    /// Check if sender should be blocked.
    pub fn is_blocked(&self) -> bool {
        self.no_shows >= 3 || (self.reservations_made >= 5 && self.honor_rate() < 0.5)
    }

    /// Record a completed reservation.
    pub fn record_honored(&mut self) {
        self.reservations_made += 1;
        self.reservations_honored += 1;
    }

    /// Record a cancelled reservation (with notice).
    pub fn record_cancelled(&mut self) {
        self.reservations_made += 1;
        self.reservations_cancelled += 1;
    }

    /// Record a no-show.
    pub fn record_no_show(&mut self) {
        self.reservations_made += 1;
        self.no_shows += 1;
        self.last_no_show = Some(now());
    }
}

/// Reputation store.
#[derive(Debug, Default)]
pub struct ReputationStore {
    reputations: HashMap<[u8; 16], SenderReputation>,
}

impl ReputationStore {
    pub fn new() -> Self {
        Self::default()
    }

    /// Get or create reputation for a sender.
    pub fn get_or_create(&mut self, address: [u8; 16]) -> &mut SenderReputation {
        self.reputations
            .entry(address)
            .or_insert_with(|| SenderReputation::new(address))
    }

    /// Get reputation (read-only).
    pub fn get(&self, address: &[u8; 16]) -> Option<&SenderReputation> {
        self.reputations.get(address)
    }

    /// Check if sender is blocked.
    pub fn is_blocked(&self, address: &[u8; 16]) -> bool {
        self.reputations
            .get(address)
            .map(|r| r.is_blocked())
            .unwrap_or(false)
    }

    /// Get honor rate (0.5 for unknown).
    pub fn honor_rate(&self, address: &[u8; 16]) -> f32 {
        self.reputations
            .get(address)
            .map(|r| r.honor_rate())
            .unwrap_or(0.5)
    }
}
/// Blocklist entry.
#[derive(Debug, Clone)]
pub struct BlocklistEntry {
    pub blocked_address: [u8; 16],
    pub reason: BlockReason,
    pub reported_by: [u8; 16],
    pub signature: Vec<u8>,
    pub timestamp: u64,
}

/// Reason for blocking.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum BlockReason {
    NoShow = 1,
    Spam = 2,
    Harassment = 3,
    FakeIdentity = 4,
}

/// Therapist-defined reservation policy.
#[derive(Debug, Clone)]
pub struct TherapistPolicy {
    /// Max pending reservations from new senders.
    pub max_pending_new: u8,
    /// Max pending from established senders.
    pub max_pending_established: u8,
    /// Require this verification level for reservations.
    pub min_verification_level: u8,
    /// Auto-reject senders with honor rate below this.
    pub min_honor_rate: f32,
    /// Require proof-of-work.
    pub require_pow: bool,
    /// PoW difficulty (if required).
    pub pow_difficulty: u8,
}

impl Default for TherapistPolicy {
    fn default() -> Self {
        Self {
            max_pending_new: 1,
            max_pending_established: 3,
            min_verification_level: 0,
            min_honor_rate: 0.5,
            require_pow: true,
            pow_difficulty: ProofOfWork::DEFAULT_DIFFICULTY,
        }
    }
}
impl TherapistPolicy {
    /// Check if a reservation request meets policy.
    pub fn check(
        &self,
        sender_reputation: &SenderReputation,
        sender_verification_level: u8,
        pow: Option<&ProofOfWork>,
        reservation_id: &[u8; 16],
    ) -> PolicyResult {
        // Check verification level
        if sender_verification_level < self.min_verification_level {
            return PolicyResult::InsufficientVerification;
        }

        // Check honor rate
        if sender_reputation.honor_rate() < self.min_honor_rate {
            return PolicyResult::LowReputation;
        }

        // Check blocked
        if sender_reputation.is_blocked() {
            return PolicyResult::Blocked;
        }

        // Check proof-of-work
        if self.require_pow {
            match pow {
                Some(p) if p.difficulty >= self.pow_difficulty && p.verify(reservation_id) => {}
                Some(_) => return PolicyResult::InvalidPoW,
                None => return PolicyResult::MissingPoW,
            }
        }

        PolicyResult::Allowed
    }
}

/// Result of policy check.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum PolicyResult {
    Allowed,
    InsufficientVerification,
    LowReputation,
    Blocked,
    MissingPoW,
    InvalidPoW,
}

impl PolicyResult {
    pub fn is_allowed(&self) -> bool {
        matches!(self, PolicyResult::Allowed)
    }
}

fn now() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rate_limiter_allows_first_reservation() {
        let mut limiter = RateLimiter::new();
        let sender = [1u8; 16];
        assert!(limiter.check_reservation(&sender).is_allowed());
    }

    #[test]
    fn rate_limiter_enforces_cooldown() {
        let mut limiter = RateLimiter::with_limits(RateLimits {
            reservation_cooldown_secs: 300,
            ..Default::default()
        });
        let sender = [2u8; 16];
        limiter.record_reservation(&sender);
        let result = limiter.check_reservation(&sender);
        assert!(matches!(result, RateLimitResult::Cooldown { .. }));
    }

    #[test]
    fn rate_limiter_enforces_hourly_limit() {
        let mut limiter = RateLimiter::with_limits(RateLimits {
            max_reservations_per_hour: 2,
            reservation_cooldown_secs: 0,
            ..Default::default()
        });
        let sender = [3u8; 16];
        limiter.record_reservation(&sender);
        limiter.record_reservation(&sender);
        assert_eq!(limiter.check_reservation(&sender), RateLimitResult::HourlyLimitReached);
    }

    #[test]
    fn pow_generation_and_verification() {
        let reservation_id = [42u8; 16];
        let pow = ProofOfWork::generate(&reservation_id, 8); // Low difficulty for test
        assert!(pow.verify(&reservation_id));
        assert!(!pow.verify(&[0u8; 16])); // Wrong ID
    }

    #[test]
    fn reputation_tracking() {
        let mut rep = SenderReputation::new([5u8; 16]);
        rep.record_honored();
        rep.record_honored();
        rep.record_no_show();
        assert_eq!(rep.reservations_made, 3);
        assert_eq!(rep.honor_rate(), 2.0 / 3.0);
        assert!(!rep.is_blocked());
        rep.record_no_show();
        rep.record_no_show();
        assert!(rep.is_blocked()); // 3 no-shows
    }

    #[test]
    fn policy_check_pow() {
        let policy = TherapistPolicy {
            require_pow: true,
            pow_difficulty: 8,
            ..Default::default()
        };
        let rep = SenderReputation::new([6u8; 16]);
        let reservation_id = [7u8; 16];

        // No PoW
        assert_eq!(
            policy.check(&rep, 0, None, &reservation_id),
            PolicyResult::MissingPoW
        );

        // Valid PoW
        let pow = ProofOfWork::generate(&reservation_id, 8);
        assert_eq!(
            policy.check(&rep, 0, Some(&pow), &reservation_id),
            PolicyResult::Allowed
        );
    }

    #[test]
    fn policy_check_verification_level() {
        let policy = TherapistPolicy {
min_verification_level: 2,
require_pow: false,
..Default::default()
};
let rep = SenderReputation::new([8u8; 16]);
let reservation_id = [9u8; 16];
assert_eq!(
policy.check(&rep, 1, None, &reservation_id),
PolicyResult::InsufficientVerification
);
assert_eq!(
policy.check(&rep, 2, None, &reservation_id),
PolicyResult::Allowed
);
}
}
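The `ProofOfWork` internals are not shown in this excerpt; as a hedged illustration of the scheme the tests imply (generate a nonce, verify it against a difficulty target, reject for a different ID), here is a minimal std-only sketch. It uses `DefaultHasher` purely as a stand-in for the real cryptographic hash, and the `(id || nonce)` puzzle input is an assumption:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in digest over (id || nonce). The real crate presumably uses a
/// cryptographic hash; DefaultHasher is only for this self-contained sketch.
fn puzzle_digest(id: &[u8; 16], nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    id.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// A nonce is valid if its digest has at least `difficulty` leading zero bits.
fn verify(id: &[u8; 16], nonce: u64, difficulty: u32) -> bool {
    puzzle_digest(id, nonce).leading_zeros() >= difficulty
}

/// Brute-force search for a valid nonce (expected ~2^difficulty attempts).
fn solve(id: &[u8; 16], difficulty: u32) -> u64 {
    (0u64..)
        .find(|&n| verify(id, n, difficulty))
        .expect("search space exhausted")
}

fn main() {
    let id = [42u8; 16];
    let nonce = solve(&id, 8); // low difficulty, as in the tests above
    assert!(verify(&id, nonce, 8));
    println!("found nonce {nonce}");
}
```

A higher `difficulty` roughly doubles the expected work per extra bit, which is what makes the policy's `pow_difficulty` knob a spam cost dial.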


@@ -0,0 +1,392 @@
//! End-to-end encryption for service message payloads.
//!
//! Uses X25519 key agreement + HKDF-SHA256 key derivation + ChaCha20-Poly1305 AEAD.
//! Encryption is opt-in per message: the sender encrypts the payload before
//! constructing the `ServiceMessage`, and the recipient decrypts after receiving.
//!
//! ## Key model
//!
//! Each `ServiceIdentity` (Ed25519) can derive an X25519 keypair for encryption.
//! - Sender generates an ephemeral X25519 key per message (forward secrecy).
//! - Shared secret is computed via X25519 DH with the recipient's public key.
//! - HKDF derives a per-message encryption key.
//! - ChaCha20-Poly1305 encrypts the payload with a random nonce.
//!
//! ## Wire format of encrypted payload
//!
//! ```text
//! [1 byte: version = 0x01]
//! [32 bytes: sender ephemeral X25519 public key]
//! [12 bytes: nonce]
//! [N bytes: ciphertext + 16-byte Poly1305 tag]
//! ```
use chacha20poly1305::aead::{Aead, KeyInit};
use chacha20poly1305::{ChaCha20Poly1305, Nonce};
use hkdf::Hkdf;
use rand::rngs::OsRng;
use rand::RngCore;
use x25519_dalek::{PublicKey as X25519Public, StaticSecret};
use crate::error::ServiceError;
use crate::identity::ServiceIdentity;
/// Current encrypted payload version byte.
const ENCRYPTED_VERSION: u8 = 0x01;
/// Overhead: 1 (version) + 32 (ephemeral pubkey) + 12 (nonce) + 16 (tag).
const ENCRYPTION_OVERHEAD: usize = 1 + 32 + 12 + 16;
/// X25519 keypair derived from a `ServiceIdentity` for encryption.
///
/// The Ed25519 seed is reused as the X25519 static secret. This is the
/// standard Ed25519-to-X25519 conversion used by libsodium and others.
pub struct EncryptionKeyPair {
secret: StaticSecret,
public: X25519Public,
}
impl EncryptionKeyPair {
/// Derive an encryption keypair from a `ServiceIdentity`.
pub fn from_identity(identity: &ServiceIdentity) -> Self {
let secret = StaticSecret::from(identity.secret_key());
let public = X25519Public::from(&secret);
Self { secret, public }
}
/// Get the X25519 public key bytes (advertise to peers for encryption).
pub fn public_bytes(&self) -> [u8; 32] {
self.public.to_bytes()
}
/// Encrypt a plaintext payload for a specific recipient.
///
/// Uses a fresh ephemeral key for forward secrecy: even if the sender's
/// long-term key is compromised, past messages remain confidential.
pub fn encrypt_for(
&self,
recipient_x25519_public: &[u8; 32],
plaintext: &[u8],
) -> Result<Vec<u8>, ServiceError> {
// Generate ephemeral keypair for this message
let eph_secret = StaticSecret::random_from_rng(OsRng);
let eph_public = X25519Public::from(&eph_secret);
// X25519 DH with recipient
let recipient_pub = X25519Public::from(*recipient_x25519_public);
let shared = eph_secret.diffie_hellman(&recipient_pub);
// Derive encryption key via HKDF
let key = derive_key(shared.as_bytes(), b"meshservice-e2e-v1");
// Encrypt with ChaCha20-Poly1305
let cipher = ChaCha20Poly1305::new((&key).into());
let mut nonce_bytes = [0u8; 12];
OsRng.fill_bytes(&mut nonce_bytes);
let nonce = Nonce::from_slice(&nonce_bytes);
let ciphertext = cipher
.encrypt(nonce, plaintext)
.map_err(|_| ServiceError::Crypto("encryption failed".into()))?;
// Assemble: version || ephemeral_public || nonce || ciphertext+tag
let mut out = Vec::with_capacity(ENCRYPTION_OVERHEAD + plaintext.len());
out.push(ENCRYPTED_VERSION);
out.extend_from_slice(&eph_public.to_bytes());
out.extend_from_slice(&nonce_bytes);
out.extend_from_slice(&ciphertext);
Ok(out)
}
/// Decrypt an encrypted payload sent to us.
///
/// Extracts the sender's ephemeral public key from the payload, computes
/// the shared secret with our static X25519 key, and decrypts.
pub fn decrypt(&self, encrypted: &[u8]) -> Result<Vec<u8>, ServiceError> {
if encrypted.len() < ENCRYPTION_OVERHEAD {
return Err(ServiceError::Crypto("ciphertext too short".into()));
}
let version = encrypted[0];
if version != ENCRYPTED_VERSION {
return Err(ServiceError::Crypto(format!(
"unsupported encryption version: {version}"
)));
}
let eph_public_bytes: [u8; 32] = encrypted[1..33]
.try_into()
.map_err(|_| ServiceError::Crypto("invalid ephemeral key".into()))?;
let nonce_bytes: [u8; 12] = encrypted[33..45]
.try_into()
.map_err(|_| ServiceError::Crypto("invalid nonce".into()))?;
let ciphertext = &encrypted[45..];
// X25519 DH with sender's ephemeral key
let eph_public = X25519Public::from(eph_public_bytes);
let shared = self.secret.diffie_hellman(&eph_public);
// Derive decryption key
let key = derive_key(shared.as_bytes(), b"meshservice-e2e-v1");
// Decrypt
let cipher = ChaCha20Poly1305::new((&key).into());
let nonce = Nonce::from_slice(&nonce_bytes);
cipher
.decrypt(nonce, ciphertext)
.map_err(|_| ServiceError::Crypto("decryption failed".into()))
}
}
/// Derive a 32-byte key from a shared secret using HKDF-SHA256.
fn derive_key(shared_secret: &[u8], info: &[u8]) -> [u8; 32] {
let hk = Hkdf::<sha2::Sha256>::new(None, shared_secret);
let mut key = [0u8; 32];
hk.expand(info, &mut key)
.expect("HKDF expand to 32 bytes should never fail");
key
}
/// Check whether a payload appears to be encrypted (starts with version byte
/// and has minimum length).
pub fn is_encrypted_payload(payload: &[u8]) -> bool {
payload.len() >= ENCRYPTION_OVERHEAD && payload[0] == ENCRYPTED_VERSION
}
/// Return the encryption overhead in bytes (useful for size budgets on
/// constrained transports like LoRa).
pub const fn encryption_overhead() -> usize {
ENCRYPTION_OVERHEAD
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn encrypt_decrypt_roundtrip() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"Hello, encrypted mesh world!";
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
let decrypted = recipient_keys.decrypt(&encrypted).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn wrong_recipient_cannot_decrypt() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let wrong_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let wrong_keys = EncryptionKeyPair::from_identity(&wrong_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"secret data")
.expect("encrypt");
let result = wrong_keys.decrypt(&encrypted);
assert!(result.is_err());
}
#[test]
fn tampered_ciphertext_fails() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let mut encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"do not tamper")
.expect("encrypt");
// Flip a byte in the ciphertext portion
let last = encrypted.len() - 1;
encrypted[last] ^= 0xff;
let result = recipient_keys.decrypt(&encrypted);
assert!(result.is_err());
}
#[test]
fn truncated_ciphertext_rejected() {
let recipient_id = ServiceIdentity::generate();
let keys = EncryptionKeyPair::from_identity(&recipient_id);
let result = keys.decrypt(&[0x01; 10]);
assert!(result.is_err());
}
#[test]
fn bad_version_rejected() {
let recipient_id = ServiceIdentity::generate();
let keys = EncryptionKeyPair::from_identity(&recipient_id);
        // Valid length but wrong version byte
        let fake = vec![0x99u8; ENCRYPTION_OVERHEAD + 10];
let result = keys.decrypt(&fake);
assert!(result.is_err());
}
#[test]
fn each_encryption_produces_different_ciphertext() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"same message twice";
let enc1 = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt 1");
let enc2 = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt 2");
// Different ephemeral keys + nonces => different ciphertext
assert_ne!(enc1, enc2);
// Both decrypt to the same plaintext
let dec1 = recipient_keys.decrypt(&enc1).expect("decrypt 1");
let dec2 = recipient_keys.decrypt(&enc2).expect("decrypt 2");
assert_eq!(dec1, plaintext);
assert_eq!(dec2, plaintext);
}
#[test]
fn empty_plaintext_roundtrip() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"")
.expect("encrypt empty");
assert_eq!(encrypted.len(), ENCRYPTION_OVERHEAD);
let decrypted = recipient_keys.decrypt(&encrypted).expect("decrypt empty");
assert!(decrypted.is_empty());
}
#[test]
fn is_encrypted_payload_detection() {
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let encrypted = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), b"test")
.expect("encrypt");
assert!(is_encrypted_payload(&encrypted));
assert!(!is_encrypted_payload(b"plain text"));
assert!(!is_encrypted_payload(&[]));
}
#[test]
fn public_bytes_deterministic() {
let id = ServiceIdentity::generate();
let keys1 = EncryptionKeyPair::from_identity(&id);
let keys2 = EncryptionKeyPair::from_identity(&id);
assert_eq!(keys1.public_bytes(), keys2.public_bytes());
}
#[test]
fn encrypt_decrypt_with_service_message() {
// Full integration: encrypt payload, wrap in ServiceMessage, decrypt
use crate::message::ServiceMessage;
use crate::service_ids::FAPP;
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
// Encrypt the payload before creating the message
let plaintext = b"confidential appointment details";
let encrypted_payload = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
// Create a signed service message with the encrypted payload
let msg = ServiceMessage::new(
&sender_id,
FAPP,
crate::message::MessageType::Reserve,
encrypted_payload.clone(),
1,
);
// Verify the message signature still works (signs over encrypted payload)
assert!(msg.verify(&sender_id.public_key()));
// Recipient decrypts the payload
let decrypted = recipient_keys.decrypt(&msg.payload).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn encrypt_decrypt_wire_roundtrip() {
// Full wire roundtrip: encrypt -> sign -> encode -> decode -> verify -> decrypt
use crate::message::ServiceMessage;
use crate::service_ids::FAPP;
use crate::wire;
let sender_id = ServiceIdentity::generate();
let recipient_id = ServiceIdentity::generate();
let sender_keys = EncryptionKeyPair::from_identity(&sender_id);
let recipient_keys = EncryptionKeyPair::from_identity(&recipient_id);
let plaintext = b"sensitive medical data over the mesh";
let encrypted_payload = sender_keys
.encrypt_for(&recipient_keys.public_bytes(), plaintext)
.expect("encrypt");
let msg = ServiceMessage::new(
&sender_id,
FAPP,
crate::message::MessageType::Reserve,
encrypted_payload,
42,
);
// Encode to wire format
let wire_bytes = wire::encode(&msg).expect("encode");
// Decode from wire format
let decoded = wire::decode(&wire_bytes).expect("decode");
// Verify signature
assert!(decoded.verify(&sender_id.public_key()));
// Decrypt payload
let decrypted = recipient_keys.decrypt(&decoded.payload).expect("decrypt");
assert_eq!(decrypted, plaintext);
}
#[test]
fn encryption_overhead_constant() {
assert_eq!(encryption_overhead(), 61);
}
}
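The encrypted-payload layout documented above (version byte, ephemeral key, nonce, ciphertext plus tag) can be split without any crypto dependencies. A std-only sketch of the framing, with the constants copied from the module:

```rust
/// Parsed view of the v1 encrypted payload:
/// [1: version][32: ephemeral X25519 pubkey][12: nonce][N: ciphertext + 16-byte tag]
struct EncryptedParts<'a> {
    version: u8,
    ephemeral_public: &'a [u8],
    nonce: &'a [u8],
    ciphertext: &'a [u8], // includes the 16-byte Poly1305 tag
}

const OVERHEAD: usize = 1 + 32 + 12 + 16; // 61 bytes

fn split_encrypted(payload: &[u8]) -> Option<EncryptedParts<'_>> {
    // Mirrors the checks in `decrypt`: minimum length, then version byte.
    if payload.len() < OVERHEAD || payload[0] != 0x01 {
        return None;
    }
    Some(EncryptedParts {
        version: payload[0],
        ephemeral_public: &payload[1..33],
        nonce: &payload[33..45],
        ciphertext: &payload[45..],
    })
}

fn main() {
    // Structurally valid (but cryptographically meaningless) payload.
    let payload = vec![0x01u8; OVERHEAD + 5];
    let parts = split_encrypted(&payload).expect("layout ok");
    assert_eq!(parts.version, 0x01);
    assert_eq!(parts.ephemeral_public.len(), 32);
    assert_eq!(parts.nonce.len(), 12);
    assert_eq!(parts.ciphertext.len(), 16 + 5); // tag + 5 plaintext bytes
    assert!(split_encrypted(&[0x01; 10]).is_none()); // too short
}
```

This is the same framing `decrypt` performs inline before the Diffie-Hellman step, and explains why `encryption_overhead()` returns 61.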


@@ -0,0 +1,55 @@
//! Error types for the mesh service layer.
use thiserror::Error;
/// Errors that can occur in the service layer.
#[derive(Debug, Error)]
pub enum ServiceError {
#[error("invalid message format: {0}")]
InvalidFormat(String),
#[error("unknown service ID: {0}")]
UnknownService(u32),
#[error("signature verification failed")]
SignatureInvalid,
#[error("message expired")]
Expired,
#[error("max hops exceeded")]
MaxHopsExceeded,
#[error("missing capability: {0}")]
MissingCapability(String),
#[error("store full")]
StoreFull,
#[error("duplicate message")]
Duplicate,
#[error("serialization error: {0}")]
Serialization(String),
#[error("crypto error: {0}")]
Crypto(String),
#[error("verification required: minimum level {0}")]
VerificationRequired(u8),
#[error("service handler error: {0}")]
Handler(String),
}
impl From<ciborium::ser::Error<std::io::Error>> for ServiceError {
fn from(e: ciborium::ser::Error<std::io::Error>) -> Self {
ServiceError::Serialization(e.to_string())
}
}
impl From<ciborium::de::Error<std::io::Error>> for ServiceError {
fn from(e: ciborium::de::Error<std::io::Error>) -> Self {
ServiceError::Serialization(e.to_string())
}
}


@@ -0,0 +1,119 @@
//! Service identity management using Ed25519.
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;
use sha2::{Digest, Sha256};
/// A service participant's identity (Ed25519 keypair).
#[derive(Clone)]
pub struct ServiceIdentity {
signing_key: SigningKey,
}
impl ServiceIdentity {
/// Generate a new random identity.
pub fn generate() -> Self {
use rand::RngCore;
let mut secret = [0u8; 32];
OsRng.fill_bytes(&mut secret);
let signing_key = SigningKey::from_bytes(&secret);
Self { signing_key }
}
/// Create from an existing secret key.
pub fn from_secret(secret: &[u8; 32]) -> Self {
let signing_key = SigningKey::from_bytes(secret);
Self { signing_key }
}
/// Get the 32-byte public key.
pub fn public_key(&self) -> [u8; 32] {
self.signing_key.verifying_key().to_bytes()
}
/// Get the 32-byte secret key (for persistence).
pub fn secret_key(&self) -> [u8; 32] {
self.signing_key.to_bytes()
}
/// Compute the 16-byte mesh address from the public key.
pub fn address(&self) -> [u8; 16] {
compute_address(&self.public_key())
}
/// Sign a message.
pub fn sign(&self, message: &[u8]) -> [u8; 64] {
let sig = self.signing_key.sign(message);
sig.to_bytes()
}
/// Verify a signature against a public key.
pub fn verify(public_key: &[u8; 32], message: &[u8], signature: &[u8; 64]) -> bool {
let Ok(verifying_key) = VerifyingKey::from_bytes(public_key) else {
return false;
};
let sig = Signature::from_bytes(signature);
verifying_key.verify(message, &sig).is_ok()
}
}
/// Compute a 16-byte mesh address from a 32-byte public key.
///
/// Address = SHA-256(public_key)[0..16]
pub fn compute_address(public_key: &[u8; 32]) -> [u8; 16] {
let hash = Sha256::digest(public_key);
let mut addr = [0u8; 16];
addr.copy_from_slice(&hash[..16]);
addr
}
impl std::fmt::Debug for ServiceIdentity {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ServiceIdentity")
.field("address", &hex::encode(self.address()))
.finish()
}
}
// Hex encoding for debug output
mod hex {
pub fn encode(bytes: impl AsRef<[u8]>) -> String {
bytes.as_ref().iter().map(|b| format!("{b:02x}")).collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn generate_and_sign() {
let id = ServiceIdentity::generate();
let msg = b"hello world";
let sig = id.sign(msg);
assert!(ServiceIdentity::verify(&id.public_key(), msg, &sig));
}
#[test]
fn address_is_deterministic() {
let id = ServiceIdentity::generate();
let addr1 = id.address();
let addr2 = compute_address(&id.public_key());
assert_eq!(addr1, addr2);
}
#[test]
fn wrong_message_fails() {
let id = ServiceIdentity::generate();
let sig = id.sign(b"correct");
assert!(!ServiceIdentity::verify(&id.public_key(), b"wrong", &sig));
}
#[test]
fn roundtrip_secret() {
let id = ServiceIdentity::generate();
let secret = id.secret_key();
let restored = ServiceIdentity::from_secret(&secret);
assert_eq!(id.public_key(), restored.public_key());
}
}


@@ -0,0 +1,90 @@
//! # MeshService — Generic Decentralized Service Layer
//!
//! A protocol and runtime for building decentralized services on mesh networks.
//! Any service following the Announce → Query → Response → Reserve pattern
//! can be implemented on this layer.
//!
//! ## Architecture
//!
//! ```text
//! ┌─────────────────────────────────────────────────────────────┐
//! │ Application Services │
//! │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
//! │ │ FAPP │ │ Housing │ │ Repair │ │ Custom │ ... │
//! │ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
//! │ └────────────┴────────────┴────────────┘ │
//! │ Service Layer (this crate) │
//! │ ServiceMessage, ServiceRouter, Verification │
//! │ ─────────────────────────────────────────────────────── │
//! │ Mesh Layer │
//! │ (provided by quicprochat-p2p or other mesh impl) │
//! └─────────────────────────────────────────────────────────────┘
//! ```
//!
//! ## Quick Start
//!
//! ```rust,ignore
//! use meshservice::{capabilities, ServiceRouter};
//! use meshservice::services::fapp::FappService;
//!
//! // Create a router with this node's capability flags
//! let mut router = ServiceRouter::new(capabilities::PROVIDER | capabilities::RELAY);
//!
//! // Register services (handlers are boxed trait objects)
//! router.register(Box::new(FappService::new()));
//!
//! // Handle a decoded incoming message; the sender key enables signature checks
//! let action = router.handle(message, Some(sender_public_key))?;
//! ```
pub mod identity;
pub mod message;
pub mod router;
pub mod store;
pub mod verification;
pub mod services;
pub mod wire;
pub mod error;
pub mod anti_abuse;
pub mod crypto;
pub use identity::ServiceIdentity;
pub use message::{ServiceMessage, MessageType};
pub use router::{ServiceRouter, ServiceHandler, ServiceAction};
pub use store::ServiceStore;
pub use verification::{Verification, VerificationLevel};
pub use error::ServiceError;
pub use anti_abuse::{RateLimiter, RateLimits, ProofOfWork, SenderReputation, TherapistPolicy};
pub use crypto::{EncryptionKeyPair, is_encrypted_payload, encryption_overhead};
/// Well-known service IDs.
pub mod service_ids {
/// Free Appointment Propagation Protocol (psychotherapy).
pub const FAPP: u32 = 0x0001;
/// Housing / room sharing.
pub const HOUSING: u32 = 0x0002;
/// Repair services / craftsmen.
pub const REPAIR: u32 = 0x0003;
/// Tutoring / education.
pub const TUTOR: u32 = 0x0004;
/// Medical appointments.
pub const MEDICAL: u32 = 0x0005;
/// Legal consultation.
pub const LEGAL: u32 = 0x0006;
/// Volunteer coordination.
pub const VOLUNTEER: u32 = 0x0007;
/// Events / tickets.
pub const EVENTS: u32 = 0x0008;
/// Reserved for user-defined services.
pub const CUSTOM_START: u32 = 0x8000;
}
/// Capability flags for service participation.
pub mod capabilities {
/// Node can announce/provide services.
pub const PROVIDER: u16 = 0x0100;
/// Node caches and relays service messages.
pub const RELAY: u16 = 0x0200;
/// Node can query/consume services.
pub const CONSUMER: u16 = 0x0400;
}
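The capability constants above are bit flags: a node's capability word ORs them together, and participation checks use bitwise AND. A small std-only sketch with the constants copied from the module:

```rust
// Constants copied from meshservice::capabilities.
const PROVIDER: u16 = 0x0100;
const RELAY: u16 = 0x0200;
const CONSUMER: u16 = 0x0400;

fn can_provide(caps: u16) -> bool { caps & PROVIDER != 0 }
fn can_relay(caps: u16) -> bool { caps & RELAY != 0 }
fn can_consume(caps: u16) -> bool { caps & CONSUMER != 0 }

fn main() {
    // A providing relay node that does not consume services itself.
    let caps = PROVIDER | RELAY;
    assert!(can_provide(caps));
    assert!(can_relay(caps));
    assert!(!can_consume(caps));
}
```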


@@ -0,0 +1,321 @@
//! Core message types for the service layer.
use std::time::{SystemTime, UNIX_EPOCH};
use serde::{Deserialize, Serialize};
use crate::identity::ServiceIdentity;
use crate::verification::Verification;
/// Message types within a service.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum MessageType {
/// Provider announces availability.
Announce = 0x01,
/// Consumer queries for matches.
Query = 0x02,
/// Response to a query.
Response = 0x03,
/// Consumer reserves a slot/item.
Reserve = 0x04,
/// Provider confirms/rejects reservation.
Confirm = 0x05,
/// Either party cancels.
Cancel = 0x06,
/// Provider updates an existing announce (partial).
Update = 0x07,
/// Provider revokes an announce.
Revoke = 0x08,
}
impl TryFrom<u8> for MessageType {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(MessageType::Announce),
0x02 => Ok(MessageType::Query),
0x03 => Ok(MessageType::Response),
0x04 => Ok(MessageType::Reserve),
0x05 => Ok(MessageType::Confirm),
0x06 => Ok(MessageType::Cancel),
0x07 => Ok(MessageType::Update),
0x08 => Ok(MessageType::Revoke),
_ => Err(()),
}
}
}
/// A generic service message that can carry any application payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServiceMessage {
/// Service identifier (which application).
pub service_id: u32,
/// Message type within service.
pub message_type: MessageType,
/// Protocol version for forward compatibility.
pub version: u8,
/// Unique message ID.
pub id: [u8; 16],
/// Sender's mesh address.
pub sender_address: [u8; 16],
/// Application-specific CBOR payload.
pub payload: Vec<u8>,
/// Ed25519 signature over signable fields.
pub signature: Vec<u8>,
/// Optional verifications from trusted parties.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub verifications: Vec<Verification>,
/// Monotonically increasing per sender (dedup/supersede).
pub sequence: u64,
/// Time-to-live in hours.
pub ttl_hours: u16,
/// Unix timestamp of creation.
pub timestamp: u64,
/// Current hop count (incremented on re-broadcast).
pub hop_count: u8,
/// Maximum propagation hops.
pub max_hops: u8,
}
/// Default TTL: 7 days.
const DEFAULT_TTL_HOURS: u16 = 168;
/// Default max hops.
const DEFAULT_MAX_HOPS: u8 = 8;
impl ServiceMessage {
/// Create a new service message.
pub fn new(
identity: &ServiceIdentity,
service_id: u32,
message_type: MessageType,
payload: Vec<u8>,
sequence: u64,
) -> Self {
Self::with_options(
identity,
service_id,
message_type,
payload,
sequence,
DEFAULT_TTL_HOURS,
DEFAULT_MAX_HOPS,
)
}
/// Create with custom TTL and max hops.
pub fn with_options(
identity: &ServiceIdentity,
service_id: u32,
message_type: MessageType,
payload: Vec<u8>,
sequence: u64,
ttl_hours: u16,
max_hops: u8,
) -> Self {
use sha2::{Digest, Sha256};
let sender_address = identity.address();
// Generate unique ID from address + sequence
let id_hash = Sha256::digest(
[&sender_address[..], &sequence.to_le_bytes()].concat()
);
let mut id = [0u8; 16];
id.copy_from_slice(&id_hash[..16]);
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let mut msg = Self {
service_id,
message_type,
version: 1,
id,
sender_address,
payload,
signature: Vec::new(),
verifications: Vec::new(),
sequence,
ttl_hours,
timestamp,
hop_count: 0,
max_hops,
};
let signable = msg.signable_bytes();
msg.signature = identity.sign(&signable).to_vec();
msg
}
/// Create an announce message.
pub fn announce(
identity: &ServiceIdentity,
service_id: u32,
payload: Vec<u8>,
sequence: u64,
) -> Self {
Self::new(identity, service_id, MessageType::Announce, payload, sequence)
}
/// Create a query message.
pub fn query(
identity: &ServiceIdentity,
service_id: u32,
payload: Vec<u8>,
) -> Self {
// Queries use random sequence (not monotonic)
let sequence = rand::random();
Self::with_options(
identity,
service_id,
MessageType::Query,
payload,
sequence,
1, // 1 hour TTL for queries
DEFAULT_MAX_HOPS,
)
}
/// Create a response message.
pub fn response(
identity: &ServiceIdentity,
service_id: u32,
query_id: [u8; 16],
payload: Vec<u8>,
) -> Self {
let mut msg = Self::new(
identity,
service_id,
MessageType::Response,
payload,
rand::random(),
);
        // Response ID matches query ID for correlation. The ID is one of the
        // signed fields, so the message must be re-signed after mutating it,
        // otherwise `verify` would reject the response.
        msg.id = query_id;
        let signable = msg.signable_bytes();
        msg.signature = identity.sign(&signable).to_vec();
        msg
}
/// Assemble bytes for signing/verification.
/// Excludes signature, hop_count, verifications (mutable fields).
fn signable_bytes(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(256);
buf.extend_from_slice(&self.service_id.to_le_bytes());
buf.push(self.message_type as u8);
buf.push(self.version);
buf.extend_from_slice(&self.id);
buf.extend_from_slice(&self.sender_address);
buf.extend_from_slice(&(self.payload.len() as u32).to_le_bytes());
buf.extend_from_slice(&self.payload);
buf.extend_from_slice(&self.sequence.to_le_bytes());
buf.extend_from_slice(&self.ttl_hours.to_le_bytes());
buf.extend_from_slice(&self.timestamp.to_le_bytes());
buf.push(self.max_hops);
buf
}
/// Verify the signature using the sender's public key.
pub fn verify(&self, sender_public_key: &[u8; 32]) -> bool {
use crate::identity::compute_address;
// Verify address matches key
if compute_address(sender_public_key) != self.sender_address {
return false;
}
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = self.signable_bytes();
ServiceIdentity::verify(sender_public_key, &signable, &sig)
}
/// Check if the message has expired.
pub fn is_expired(&self) -> bool {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let ttl_secs = u64::from(self.ttl_hours) * 3600;
now.saturating_sub(self.timestamp) > ttl_secs
}
/// Check if the message can still propagate.
pub fn can_propagate(&self) -> bool {
self.hop_count < self.max_hops && !self.is_expired()
}
/// Create a forwarded copy with incremented hop count.
pub fn forwarded(&self) -> Self {
let mut copy = self.clone();
copy.hop_count = copy.hop_count.saturating_add(1);
copy
}
/// Get the highest verification level attached.
pub fn verification_level(&self) -> u8 {
self.verifications
.iter()
.map(|v| v.level)
.max()
.unwrap_or(0)
}
/// Add a verification to the message.
pub fn add_verification(&mut self, verification: Verification) {
self.verifications.push(verification);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn create_and_verify() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(
&id,
crate::service_ids::FAPP,
b"test payload".to_vec(),
1,
);
assert!(msg.verify(&id.public_key()));
assert!(!msg.is_expired());
assert!(msg.can_propagate());
assert_eq!(msg.hop_count, 0);
}
#[test]
fn forwarded_increments_hop() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, 1, vec![], 1);
let fwd = msg.forwarded();
assert_eq!(fwd.hop_count, 1);
assert!(fwd.verify(&id.public_key())); // Still valid
}
#[test]
fn tampered_fails_verify() {
let id = ServiceIdentity::generate();
let mut msg = ServiceMessage::announce(&id, 1, b"original".to_vec(), 1);
msg.payload = b"tampered".to_vec();
assert!(!msg.verify(&id.public_key()));
}
#[test]
fn query_has_short_ttl() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::query(&id, 1, vec![]);
assert_eq!(msg.ttl_hours, 1);
}
}
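The `signable_bytes` method above fixes exactly which fields the Ed25519 signature covers (everything except `signature`, `hop_count`, and `verifications`, with the payload length-prefixed). Its size is fully determined by the payload; a std-only sketch reproducing the layout outside the crate:

```rust
/// Mirror of ServiceMessage::signable_bytes() field order (a sketch for
/// illustration, not the crate's own implementation).
fn signable_bytes(
    service_id: u32,
    message_type: u8,
    version: u8,
    id: &[u8; 16],
    sender_address: &[u8; 16],
    payload: &[u8],
    sequence: u64,
    ttl_hours: u16,
    timestamp: u64,
    max_hops: u8,
) -> Vec<u8> {
    let mut buf = Vec::with_capacity(64 + payload.len());
    buf.extend_from_slice(&service_id.to_le_bytes());
    buf.push(message_type);
    buf.push(version);
    buf.extend_from_slice(id);
    buf.extend_from_slice(sender_address);
    // Length prefix prevents ambiguity between payload and trailing fields.
    buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    buf.extend_from_slice(payload);
    buf.extend_from_slice(&sequence.to_le_bytes());
    buf.extend_from_slice(&ttl_hours.to_le_bytes());
    buf.extend_from_slice(&timestamp.to_le_bytes());
    buf.push(max_hops);
    buf
}

fn main() {
    let buf = signable_bytes(1, 0x01, 1, &[0; 16], &[0; 16], b"hi", 7, 168, 0, 8);
    // Fixed overhead: 4+1+1+16+16+4+8+2+8+1 = 61 bytes, plus the payload.
    assert_eq!(buf.len(), 61 + 2);
}
```

Because `hop_count` is excluded from this buffer, `forwarded()` can increment it on every re-broadcast without invalidating the original signature, which is why the `forwarded_increments_hop` test still verifies.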


@@ -0,0 +1,289 @@
//! Service router dispatches messages to service-specific handlers.
use std::collections::HashMap;
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::store::{ServiceStore, StoredMessage};
use crate::verification::TrustedVerifiers;
/// Action returned by a service handler.
#[derive(Debug)]
pub enum ServiceAction {
/// Message handled, do nothing more.
Handled,
/// Store the message locally.
Store,
/// Store and forward to peers.
StoreAndForward,
/// Forward without storing (pass-through relay).
ForwardOnly,
/// Drop the message silently.
Drop,
/// Send a response back.
Respond(ServiceMessage),
/// Reject with error.
Reject(ServiceError),
}
/// Trait for service-specific handlers.
pub trait ServiceHandler: Send + Sync {
/// The service ID this handler manages.
fn service_id(&self) -> u32;
/// Human-readable service name.
fn name(&self) -> &str;
/// Handle an incoming message.
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError>;
/// Validate a message payload (service-specific logic).
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
// Default: accept all
let _ = message;
Ok(())
}
/// Check if a message matches a query.
fn matches_query(&self, announce: &StoredMessage, query: &ServiceMessage) -> bool;
}
/// Context passed to handlers.
pub struct HandlerContext<'a> {
/// Current node's capabilities.
pub capabilities: u16,
/// The store (for lookups during handle).
pub store: &'a ServiceStore,
/// Trusted verifiers for checking.
pub trusted_verifiers: &'a TrustedVerifiers,
/// Sender's public key (if known).
pub sender_public_key: Option<[u8; 32]>,
}
/// Routes messages to appropriate service handlers.
pub struct ServiceRouter {
/// Service ID -> Handler.
handlers: HashMap<u32, Box<dyn ServiceHandler>>,
/// Shared message store.
store: ServiceStore,
/// Node capabilities.
capabilities: u16,
/// Trusted verifiers.
trusted_verifiers: TrustedVerifiers,
/// Minimum verification level to accept announces (0 = any).
min_verification_level: u8,
}
impl ServiceRouter {
/// Create a new router.
pub fn new(capabilities: u16) -> Self {
Self {
handlers: HashMap::new(),
store: ServiceStore::new(),
capabilities,
trusted_verifiers: TrustedVerifiers::new(),
min_verification_level: 0,
}
}
/// Register a service handler.
pub fn register(&mut self, handler: Box<dyn ServiceHandler>) {
let id = handler.service_id();
self.handlers.insert(id, handler);
}
/// Set trusted verifiers.
pub fn set_trusted_verifiers(&mut self, verifiers: TrustedVerifiers) {
self.trusted_verifiers = verifiers;
}
/// Set minimum verification level for announces.
pub fn set_min_verification_level(&mut self, level: u8) {
self.min_verification_level = level;
}
/// Access the store.
pub fn store(&self) -> &ServiceStore {
&self.store
}
/// Mutable access to store.
pub fn store_mut(&mut self) -> &mut ServiceStore {
&mut self.store
}
/// Check if a service is registered.
pub fn has_service(&self, service_id: u32) -> bool {
self.handlers.contains_key(&service_id)
}
/// Handle an incoming message.
pub fn handle(
&mut self,
message: ServiceMessage,
sender_public_key: Option<[u8; 32]>,
) -> Result<ServiceAction, ServiceError> {
// Basic validation
if message.is_expired() {
return Err(ServiceError::Expired);
}
if message.hop_count > message.max_hops {
return Err(ServiceError::MaxHopsExceeded);
}
// Get handler
let handler = self
.handlers
.get(&message.service_id)
.ok_or(ServiceError::UnknownService(message.service_id))?;
// Validate message with handler
handler.validate(&message)?;
// Verify signature if we have public key
if let Some(pk) = &sender_public_key {
if !message.verify(pk) {
return Err(ServiceError::SignatureInvalid);
}
}
// Check verification level for announces
if message.message_type == MessageType::Announce && self.min_verification_level > 0 {
let level = self
.trusted_verifiers
.highest_level(&message.verifications, &message.sender_address);
if (level as u8) < self.min_verification_level {
return Err(ServiceError::VerificationRequired(self.min_verification_level));
}
}
// Build context
let context = HandlerContext {
capabilities: self.capabilities,
store: &self.store,
trusted_verifiers: &self.trusted_verifiers,
sender_public_key,
};
// Dispatch to handler
let action = handler.handle(&message, &context)?;
// Process action
match &action {
ServiceAction::Store | ServiceAction::StoreAndForward => {
if let Some(pk) = sender_public_key {
self.store.store(message, pk);
}
}
_ => {}
}
Ok(action)
}
/// Query the store for matching announces.
pub fn query(&self, query: &ServiceMessage) -> Vec<&StoredMessage> {
let Some(handler) = self.handlers.get(&query.service_id) else {
return Vec::new();
};
self.store.query(query.service_id, |stored| {
stored.message.message_type == MessageType::Announce
&& handler.matches_query(stored, query)
})
}
/// Get handler name for a service.
pub fn service_name(&self, service_id: u32) -> Option<&str> {
self.handlers.get(&service_id).map(|h| h.name())
}
/// List registered services.
pub fn services(&self) -> Vec<(u32, &str)> {
self.handlers
.iter()
.map(|(&id, h)| (id, h.name()))
.collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{identity::ServiceIdentity, service_ids::FAPP};
struct TestHandler;
impl ServiceHandler for TestHandler {
fn service_id(&self) -> u32 {
FAPP
}
fn name(&self) -> &str {
"Test"
}
fn handle(
&self,
message: &ServiceMessage,
_context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => Ok(ServiceAction::StoreAndForward),
MessageType::Query => Ok(ServiceAction::Handled),
_ => Ok(ServiceAction::Drop),
}
}
fn matches_query(&self, _announce: &StoredMessage, _query: &ServiceMessage) -> bool {
true // Match all for test
}
}
#[test]
fn register_and_handle() {
let mut router = ServiceRouter::new(crate::capabilities::RELAY);
router.register(Box::new(TestHandler));
assert!(router.has_service(FAPP));
assert_eq!(router.service_name(FAPP), Some("Test"));
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, FAPP, vec![], 1);
let action = router.handle(msg, Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
// Message should be stored
assert_eq!(router.store().len(), 1);
}
#[test]
fn unknown_service_rejected() {
let mut router = ServiceRouter::new(0);
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, 9999, vec![], 1);
let result = router.handle(msg, Some(id.public_key()));
assert!(matches!(result, Err(ServiceError::UnknownService(9999))));
}
#[test]
fn invalid_signature_rejected() {
let mut router = ServiceRouter::new(0);
router.register(Box::new(TestHandler));
let id1 = ServiceIdentity::generate();
let id2 = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id1, FAPP, vec![], 1);
// Pass wrong public key
let result = router.handle(msg, Some(id2.public_key()));
assert!(matches!(result, Err(ServiceError::SignatureInvalid)));
}
}
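The dispatch pattern used by `ServiceRouter` — a `HashMap` from service ID to boxed trait objects, with unknown IDs rejected — can be sketched in isolation. This is a simplified stand-in (hypothetical `Handler`/`Action`/`Echo` names, not the crate's actual types), just to show the shape of registration and dispatch:

```rust
use std::collections::HashMap;

// Simplified stand-ins for ServiceHandler / ServiceAction.
trait Handler {
    fn service_id(&self) -> u32;
    fn handle(&self, payload: &[u8]) -> Action;
}

#[derive(Debug, PartialEq)]
enum Action {
    Handled,
    Drop,
}

// A trivial handler for illustration.
struct Echo;
impl Handler for Echo {
    fn service_id(&self) -> u32 { 7 }
    fn handle(&self, payload: &[u8]) -> Action {
        if payload.is_empty() { Action::Drop } else { Action::Handled }
    }
}

struct Router {
    handlers: HashMap<u32, Box<dyn Handler>>,
}

impl Router {
    fn new() -> Self { Self { handlers: HashMap::new() } }
    // Same registration shape as ServiceRouter::register: key by the
    // handler's own service_id().
    fn register(&mut self, h: Box<dyn Handler>) {
        self.handlers.insert(h.service_id(), h);
    }
    // Unknown service IDs produce an error, as in ServiceRouter::handle.
    fn dispatch(&self, service_id: u32, payload: &[u8]) -> Result<Action, u32> {
        self.handlers
            .get(&service_id)
            .map(|h| h.handle(payload))
            .ok_or(service_id)
    }
}

fn main() {
    let mut router = Router::new();
    router.register(Box::new(Echo));
    assert_eq!(router.dispatch(7, b"hi"), Ok(Action::Handled));
    assert_eq!(router.dispatch(7, b""), Ok(Action::Drop));
    assert_eq!(router.dispatch(9999, b"x"), Err(9999));
    println!("ok");
}
```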


@@ -0,0 +1,479 @@
//! FAPP — Free Appointment Propagation Protocol.
//!
//! Decentralized psychotherapy appointment discovery.
//!
//! ## Flow
//!
//! 1. Therapist announces available slots (specialism, location, modality).
//! 2. Announcement floods through mesh (TTL-limited, signature-verified).
//! 3. Patient queries for matching slots (specialism, distance).
//! 4. Relays respond with cached matches.
//! 5. Patient reserves slot (E2E encrypted to therapist).
//! 6. Therapist confirms/rejects.
use serde::{Deserialize, Serialize};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::router::{HandlerContext, ServiceAction, ServiceHandler};
use crate::service_ids::FAPP;
use crate::store::StoredMessage;
use crate::wire::{decode_payload, encode_payload};
/// Therapy specialisms.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum Specialism {
GeneralPsychotherapy = 0x01,
CognitiveBehavioral = 0x02,
Psychoanalysis = 0x03,
SystemicTherapy = 0x04,
TraumaFocused = 0x05,
ChildAndAdolescent = 0x06,
CoupleAndFamily = 0x07,
Addiction = 0x08,
Neuropsychology = 0x09,
}
impl TryFrom<u8> for Specialism {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(Self::GeneralPsychotherapy),
0x02 => Ok(Self::CognitiveBehavioral),
0x03 => Ok(Self::Psychoanalysis),
0x04 => Ok(Self::SystemicTherapy),
0x05 => Ok(Self::TraumaFocused),
0x06 => Ok(Self::ChildAndAdolescent),
0x07 => Ok(Self::CoupleAndFamily),
0x08 => Ok(Self::Addiction),
0x09 => Ok(Self::Neuropsychology),
_ => Err(()),
}
}
}
/// Therapy modality. Discriminants are distinct bits so values can be
/// combined into a bitfield and matched with `&` (consecutive values like
/// 0x03 would falsely overlap 0x01 and 0x02 under bitwise matching).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum Modality {
InPerson = 0x01,
VideoCall = 0x02,
PhoneCall = 0x04,
TextBased = 0x08,
}
/// Slot announcement payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlotAnnounce {
/// Therapist's specialisms (bitfield).
pub specialisms: u16,
/// Modality (bitfield).
pub modality: u8,
/// Postal code (first 3 digits for privacy).
pub postal_prefix: String,
/// Geohash (6 chars, ~1.2km precision).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub geohash: Option<String>,
/// Available slots count.
pub available_slots: u8,
/// Earliest availability (days from now; compared against a query's max wait).
pub earliest_days: u16,
/// Insurance types accepted (bitfield).
pub insurance: u8,
/// Optional profile URL for verification.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub profile_url: Option<String>,
/// Optional display name.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub display_name: Option<String>,
}
impl SlotAnnounce {
/// Create a new announcement.
pub fn new(specialisms: &[Specialism], modality: Modality, postal_prefix: &str) -> Self {
let spec_bits = specialisms.iter().fold(0u16, |acc, s| acc | (1 << (*s as u8)));
Self {
specialisms: spec_bits,
modality: modality as u8,
postal_prefix: postal_prefix.into(),
geohash: None,
available_slots: 1,
earliest_days: 0,
insurance: 0xFF, // All accepted by default
profile_url: None,
display_name: None,
}
}
/// Set geohash location.
pub fn with_geohash(mut self, geohash: &str) -> Self {
self.geohash = Some(geohash[..6.min(geohash.len())].into());
self
}
/// Set available slots count.
pub fn with_slots(mut self, count: u8) -> Self {
self.available_slots = count;
self
}
/// Set earliest availability.
pub fn with_earliest(mut self, days_from_now: u16) -> Self {
self.earliest_days = days_from_now;
self
}
/// Set profile URL.
pub fn with_profile(mut self, url: &str) -> Self {
self.profile_url = Some(url.into());
self
}
/// Set display name.
pub fn with_name(mut self, name: &str) -> Self {
self.display_name = Some(name.into());
self
}
/// Check if a specialism is offered.
pub fn has_specialism(&self, spec: Specialism) -> bool {
self.specialisms & (1 << (spec as u8)) != 0
}
/// Encode to CBOR bytes.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR bytes.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
/// Insurance types.
pub mod insurance {
pub const PRIVATE: u8 = 0x01;
pub const PUBLIC: u8 = 0x02;
pub const SELF_PAY: u8 = 0x04;
}
/// Slot query payload.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlotQuery {
/// Desired specialisms (bitfield, any match).
pub specialisms: u16,
/// Postal prefix to search.
pub postal_prefix: String,
/// Max distance in km (optional).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub max_distance_km: Option<u8>,
/// Required modality (0 = any).
pub modality: u8,
/// Max wait in days.
pub max_wait_days: u16,
/// Insurance type required.
pub insurance: u8,
}
impl SlotQuery {
/// Create a query for a specialism in a postal area.
pub fn new(specialism: Specialism, postal_prefix: &str) -> Self {
Self {
specialisms: 1 << (specialism as u8),
postal_prefix: postal_prefix.into(),
max_distance_km: None,
modality: 0,
max_wait_days: 365,
insurance: 0xFF,
}
}
/// Require specific modality.
pub fn with_modality(mut self, modality: Modality) -> Self {
self.modality = modality as u8;
self
}
/// Set max wait time.
pub fn with_max_wait(mut self, days: u16) -> Self {
self.max_wait_days = days;
self
}
/// Check if an announce matches this query.
pub fn matches(&self, announce: &SlotAnnounce) -> bool {
// Specialism overlap
if announce.specialisms & self.specialisms == 0 {
return false;
}
// Postal prefix
if !announce.postal_prefix.starts_with(&self.postal_prefix)
&& !self.postal_prefix.starts_with(&announce.postal_prefix)
{
return false;
}
// Modality
if self.modality != 0 && announce.modality & self.modality == 0 {
return false;
}
// Wait time
if announce.earliest_days > self.max_wait_days {
return false;
}
// Insurance
if announce.insurance & self.insurance == 0 {
return false;
}
// Available slots
announce.available_slots > 0
}
/// Encode to CBOR bytes.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR bytes.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
/// FAPP service handler.
pub struct FappService {
/// Whether this node is a therapist (can announce).
pub is_provider: bool,
/// Whether this node relays FAPP messages.
pub is_relay: bool,
}
impl FappService {
/// Create a new FAPP handler.
pub fn new(is_provider: bool, is_relay: bool) -> Self {
Self {
is_provider,
is_relay,
}
}
/// Create a relay-only handler.
pub fn relay() -> Self {
Self::new(false, true)
}
/// Create a provider handler.
pub fn provider() -> Self {
Self::new(true, true)
}
}
impl ServiceHandler for FappService {
fn service_id(&self) -> u32 {
FAPP
}
fn name(&self) -> &str {
"FAPP"
}
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => {
// Validate payload
let _announce = SlotAnnounce::from_bytes(&message.payload)?;
// Store and forward if we're a relay
if self.is_relay {
Ok(ServiceAction::StoreAndForward)
} else {
Ok(ServiceAction::Store)
}
}
MessageType::Query => {
// Parse query
let query = SlotQuery::from_bytes(&message.payload)?;
// Find matches in store
let matches: Vec<_> = context
.store
.by_service(FAPP)
.into_iter()
.filter(|stored| {
if stored.message.message_type != MessageType::Announce {
return false;
}
if let Ok(announce) = SlotAnnounce::from_bytes(&stored.message.payload) {
query.matches(&announce)
} else {
false
}
})
.collect();
// If we have matches, we could respond (simplified for now)
if !matches.is_empty() {
// In a real impl, we'd aggregate and send response
Ok(ServiceAction::Handled)
} else if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Reserve | MessageType::Confirm | MessageType::Cancel => {
// E2E encrypted, just forward
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Revoke => {
// Accept the revocation; store removal is left to the caller in this simplified handler
Ok(ServiceAction::Handled)
}
_ => Ok(ServiceAction::Drop),
}
}
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
match message.message_type {
MessageType::Announce => {
SlotAnnounce::from_bytes(&message.payload)?;
}
MessageType::Query => {
SlotQuery::from_bytes(&message.payload)?;
}
_ => {}
}
Ok(())
}
fn matches_query(&self, announce: &StoredMessage, query_msg: &ServiceMessage) -> bool {
let Ok(announce_data) = SlotAnnounce::from_bytes(&announce.message.payload) else {
return false;
};
let Ok(query) = SlotQuery::from_bytes(&query_msg.payload) else {
return false;
};
query.matches(&announce_data)
}
}
/// Helper to create a FAPP announce message.
pub fn create_announce(
identity: &crate::ServiceIdentity,
announce: &SlotAnnounce,
sequence: u64,
) -> Result<ServiceMessage, ServiceError> {
let payload = announce.to_bytes()?;
Ok(ServiceMessage::announce(identity, FAPP, payload, sequence))
}
/// Helper to create a FAPP query message.
pub fn create_query(
identity: &crate::ServiceIdentity,
query: &SlotQuery,
) -> Result<ServiceMessage, ServiceError> {
let payload = query.to_bytes()?;
Ok(ServiceMessage::query(identity, FAPP, payload))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn slot_announce_roundtrip() {
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral, Specialism::TraumaFocused],
Modality::VideoCall,
"104",
)
.with_slots(3)
.with_profile("https://therapists.de/dr-mueller");
let bytes = announce.to_bytes().unwrap();
let decoded = SlotAnnounce::from_bytes(&bytes).unwrap();
assert!(decoded.has_specialism(Specialism::CognitiveBehavioral));
assert!(decoded.has_specialism(Specialism::TraumaFocused));
assert!(!decoded.has_specialism(Specialism::Addiction));
assert_eq!(decoded.available_slots, 3);
assert_eq!(
decoded.profile_url,
Some("https://therapists.de/dr-mueller".into())
);
}
#[test]
fn query_matches_announce() {
let announce = SlotAnnounce::new(
&[Specialism::CognitiveBehavioral],
Modality::InPerson,
"104",
)
.with_slots(2);
let matching_query = SlotQuery::new(Specialism::CognitiveBehavioral, "104");
assert!(matching_query.matches(&announce));
let wrong_spec = SlotQuery::new(Specialism::Addiction, "104");
assert!(!wrong_spec.matches(&announce));
let wrong_location = SlotQuery::new(Specialism::CognitiveBehavioral, "200");
assert!(!wrong_location.matches(&announce));
}
#[test]
fn create_message_helpers() {
let id = ServiceIdentity::generate();
let announce = SlotAnnounce::new(&[Specialism::GeneralPsychotherapy], Modality::VideoCall, "10");
let msg = create_announce(&id, &announce, 1).unwrap();
assert_eq!(msg.service_id, FAPP);
assert_eq!(msg.message_type, MessageType::Announce);
let query = SlotQuery::new(Specialism::GeneralPsychotherapy, "10");
let msg = create_query(&id, &query).unwrap();
assert_eq!(msg.service_id, FAPP);
assert_eq!(msg.message_type, MessageType::Query);
}
#[test]
fn fapp_handler_processes_announce() {
use crate::router::ServiceRouter;
use crate::capabilities;
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(FappService::relay()));
let id = ServiceIdentity::generate();
let announce = SlotAnnounce::new(&[Specialism::TraumaFocused], Modality::InPerson, "100");
let msg = create_announce(&id, &announce, 1).unwrap();
let action = router.handle(msg, Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
// Should be stored
assert_eq!(router.store().service_count(FAPP), 1);
}
}
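The matching rules in `SlotQuery::matches` reduce to two small primitives: an any-overlap bitmask test for specialisms, and a bidirectional prefix test for postal codes (a "104" announce matches a "10" query and vice versa, so coarser and finer prefixes interoperate). A self-contained sketch of just those two checks (plain functions, not the crate's types):

```rust
// Any-overlap match: at least one queried specialism bit is offered.
fn specialism_overlap(offered: u16, wanted: u16) -> bool {
    offered & wanted != 0
}

// Bidirectional prefix match: either side may be the coarser prefix.
fn postal_match(announce: &str, query: &str) -> bool {
    announce.starts_with(query) || query.starts_with(announce)
}

fn main() {
    // Bit positions mirror the Specialism discriminants.
    let cbt = 1u16 << 0x02;    // CognitiveBehavioral
    let trauma = 1u16 << 0x05; // TraumaFocused
    let offered = cbt | trauma;
    assert!(specialism_overlap(offered, cbt));
    assert!(specialism_overlap(offered, cbt | trauma));
    assert!(!specialism_overlap(offered, 1 << 0x08)); // Addiction not offered

    assert!(postal_match("104", "10")); // announce finer than query
    assert!(postal_match("10", "104")); // query finer than announce
    assert!(!postal_match("104", "200"));
    println!("ok");
}
```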


@@ -0,0 +1,489 @@
//! Housing Service — Decentralized room/apartment sharing.
//!
//! Demonstrates how a second service can be built on the mesh layer.
//!
//! ## Flow
//!
//! 1. Landlord announces available room (type, size, price, location).
//! 2. Announcement floods through mesh.
//! 3. Seeker queries for matching listings.
//! 4. Relays respond with cached matches.
//! 5. Seeker reserves viewing slot (E2E encrypted).
//! 6. Landlord confirms/rejects.
use serde::{Deserialize, Serialize};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
use crate::router::{HandlerContext, ServiceAction, ServiceHandler};
use crate::service_ids::HOUSING;
use crate::store::StoredMessage;
use crate::wire::{decode_payload, encode_payload};
/// Listing type.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum ListingType {
Room = 0x01,
SharedFlat = 0x02,
Apartment = 0x03,
House = 0x04,
Studio = 0x05,
Sublet = 0x06,
}
impl TryFrom<u8> for ListingType {
type Error = ();
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(Self::Room),
0x02 => Ok(Self::SharedFlat),
0x03 => Ok(Self::Apartment),
0x04 => Ok(Self::House),
0x05 => Ok(Self::Studio),
0x06 => Ok(Self::Sublet),
_ => Err(()),
}
}
}
/// Amenities bitfield.
pub mod amenities {
pub const FURNISHED: u16 = 0x0001;
pub const BALCONY: u16 = 0x0002;
pub const PARKING: u16 = 0x0004;
pub const PETS_ALLOWED: u16 = 0x0008;
pub const WASHING_MACHINE: u16 = 0x0010;
pub const DISHWASHER: u16 = 0x0020;
pub const ELEVATOR: u16 = 0x0040;
pub const GARDEN: u16 = 0x0080;
pub const INTERNET: u16 = 0x0100;
pub const HEATING_INCLUDED: u16 = 0x0200;
}
/// Room/listing announcement.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ListingAnnounce {
/// Type of listing.
pub listing_type: u8,
/// Size in square meters.
pub size_sqm: u16,
/// Monthly rent in cents (EUR).
pub rent_cents: u32,
/// Postal prefix (3 digits).
pub postal_prefix: String,
/// Geohash for location (6 chars).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub geohash: Option<String>,
/// Number of rooms (0 for studio).
pub rooms: u8,
/// Available from (days from now; compared against a query's max move-in window).
pub available_from_days: u16,
/// Minimum rental period in months (0 = unlimited).
pub min_months: u8,
/// Maximum rental period in months (0 = unlimited).
pub max_months: u8,
/// Amenities bitfield.
pub amenities: u16,
/// Optional title.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
/// Optional external listing URL.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub listing_url: Option<String>,
}
impl ListingAnnounce {
/// Create a new listing.
pub fn new(listing_type: ListingType, size_sqm: u16, rent_euros: u32, postal_prefix: &str) -> Self {
Self {
listing_type: listing_type as u8,
size_sqm,
rent_cents: rent_euros * 100,
postal_prefix: postal_prefix.into(),
geohash: None,
rooms: 1,
available_from_days: 0,
min_months: 0,
max_months: 0,
amenities: 0,
title: None,
listing_url: None,
}
}
/// Set rooms count.
pub fn with_rooms(mut self, rooms: u8) -> Self {
self.rooms = rooms;
self
}
/// Set geohash.
pub fn with_geohash(mut self, geohash: &str) -> Self {
self.geohash = Some(geohash[..6.min(geohash.len())].into());
self
}
/// Set amenities.
pub fn with_amenities(mut self, amenities: u16) -> Self {
self.amenities = amenities;
self
}
/// Set title.
pub fn with_title(mut self, title: &str) -> Self {
self.title = Some(title.into());
self
}
/// Set minimum/maximum rental period.
pub fn with_term(mut self, min_months: u8, max_months: u8) -> Self {
self.min_months = min_months;
self.max_months = max_months;
self
}
/// Check if has amenity.
pub fn has_amenity(&self, amenity: u16) -> bool {
self.amenities & amenity != 0
}
/// Get rent in euros.
pub fn rent_euros(&self) -> u32 {
self.rent_cents / 100
}
/// Encode to CBOR.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
/// Housing query.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ListingQuery {
/// Desired listing types (bitfield).
pub listing_types: u8,
/// Postal prefix.
pub postal_prefix: String,
/// Min size in sqm.
pub min_size_sqm: u16,
/// Max rent in cents.
pub max_rent_cents: u32,
/// Min rooms.
pub min_rooms: u8,
/// Required amenities (all must match).
pub required_amenities: u16,
/// Max move-in days.
pub max_move_in_days: u16,
}
impl ListingQuery {
/// Create a simple query.
pub fn new(postal_prefix: &str, max_rent_euros: u32) -> Self {
Self {
listing_types: 0xFF, // Any type
postal_prefix: postal_prefix.into(),
min_size_sqm: 0,
max_rent_cents: max_rent_euros * 100,
min_rooms: 0,
required_amenities: 0,
max_move_in_days: 365,
}
}
/// Filter by type.
pub fn with_type(mut self, listing_type: ListingType) -> Self {
self.listing_types = 1 << (listing_type as u8);
self
}
/// Require minimum size.
pub fn with_min_size(mut self, sqm: u16) -> Self {
self.min_size_sqm = sqm;
self
}
/// Require minimum rooms.
pub fn with_min_rooms(mut self, rooms: u8) -> Self {
self.min_rooms = rooms;
self
}
/// Require amenities.
pub fn with_amenities(mut self, amenities: u16) -> Self {
self.required_amenities = amenities;
self
}
/// Check if listing matches.
pub fn matches(&self, listing: &ListingAnnounce) -> bool {
// Type match
if self.listing_types != 0xFF && (self.listing_types & (1 << listing.listing_type) == 0) {
return false;
}
// Location
if !listing.postal_prefix.starts_with(&self.postal_prefix)
&& !self.postal_prefix.starts_with(&listing.postal_prefix)
{
return false;
}
// Size
if listing.size_sqm < self.min_size_sqm {
return false;
}
// Rent
if listing.rent_cents > self.max_rent_cents {
return false;
}
// Rooms
if listing.rooms < self.min_rooms {
return false;
}
// Amenities (all required must be present)
if listing.amenities & self.required_amenities != self.required_amenities {
return false;
}
// Availability
listing.available_from_days <= self.max_move_in_days
}
/// Encode to CBOR.
pub fn to_bytes(&self) -> Result<Vec<u8>, ServiceError> {
encode_payload(self)
}
/// Decode from CBOR.
pub fn from_bytes(data: &[u8]) -> Result<Self, ServiceError> {
decode_payload(data)
}
}
/// Housing service handler.
pub struct HousingService {
pub is_provider: bool,
pub is_relay: bool,
}
impl HousingService {
/// Create a new handler.
pub fn new(is_provider: bool, is_relay: bool) -> Self {
Self {
is_provider,
is_relay,
}
}
/// Create a relay-only handler.
pub fn relay() -> Self {
Self::new(false, true)
}
/// Create a provider handler.
pub fn provider() -> Self {
Self::new(true, true)
}
}
impl ServiceHandler for HousingService {
fn service_id(&self) -> u32 {
HOUSING
}
fn name(&self) -> &str {
"Housing"
}
fn handle(
&self,
message: &ServiceMessage,
context: &HandlerContext,
) -> Result<ServiceAction, ServiceError> {
match message.message_type {
MessageType::Announce => {
let _listing = ListingAnnounce::from_bytes(&message.payload)?;
if self.is_relay {
Ok(ServiceAction::StoreAndForward)
} else {
Ok(ServiceAction::Store)
}
}
MessageType::Query => {
let query = ListingQuery::from_bytes(&message.payload)?;
let _matches: Vec<_> = context
.store
.by_service(HOUSING)
.into_iter()
.filter(|stored| {
if stored.message.message_type != MessageType::Announce {
return false;
}
if let Ok(listing) = ListingAnnounce::from_bytes(&stored.message.payload) {
query.matches(&listing)
} else {
false
}
})
.collect();
// As in the FAPP handler, matches would be aggregated into a response in a full implementation
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Reserve | MessageType::Confirm | MessageType::Cancel => {
if self.is_relay {
Ok(ServiceAction::ForwardOnly)
} else {
Ok(ServiceAction::Handled)
}
}
MessageType::Revoke => Ok(ServiceAction::Handled),
_ => Ok(ServiceAction::Drop),
}
}
fn validate(&self, message: &ServiceMessage) -> Result<(), ServiceError> {
match message.message_type {
MessageType::Announce => {
ListingAnnounce::from_bytes(&message.payload)?;
}
MessageType::Query => {
ListingQuery::from_bytes(&message.payload)?;
}
_ => {}
}
Ok(())
}
fn matches_query(&self, listing: &StoredMessage, query_msg: &ServiceMessage) -> bool {
let Ok(listing_data) = ListingAnnounce::from_bytes(&listing.message.payload) else {
return false;
};
let Ok(query) = ListingQuery::from_bytes(&query_msg.payload) else {
return false;
};
query.matches(&listing_data)
}
}
/// Helper to create a housing announce.
pub fn create_announce(
identity: &crate::ServiceIdentity,
listing: &ListingAnnounce,
sequence: u64,
) -> Result<ServiceMessage, ServiceError> {
let payload = listing.to_bytes()?;
Ok(ServiceMessage::announce(identity, HOUSING, payload, sequence))
}
/// Helper to create a housing query.
pub fn create_query(
identity: &crate::ServiceIdentity,
query: &ListingQuery,
) -> Result<ServiceMessage, ServiceError> {
let payload = query.to_bytes()?;
Ok(ServiceMessage::query(identity, HOUSING, payload))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
#[test]
fn listing_roundtrip() {
let listing = ListingAnnounce::new(ListingType::Apartment, 65, 850, "104")
.with_rooms(2)
.with_amenities(amenities::FURNISHED | amenities::BALCONY)
.with_title("Cozy 2-room in Kreuzberg");
let bytes = listing.to_bytes().unwrap();
let decoded = ListingAnnounce::from_bytes(&bytes).unwrap();
assert_eq!(decoded.size_sqm, 65);
assert_eq!(decoded.rent_euros(), 850);
assert_eq!(decoded.rooms, 2);
assert!(decoded.has_amenity(amenities::FURNISHED));
assert!(decoded.has_amenity(amenities::BALCONY));
assert!(!decoded.has_amenity(amenities::PARKING));
}
#[test]
fn query_matches() {
let listing = ListingAnnounce::new(ListingType::Apartment, 50, 700, "104")
.with_rooms(2)
.with_amenities(amenities::FURNISHED);
// Basic match
let query = ListingQuery::new("104", 800);
assert!(query.matches(&listing));
// Too expensive for query
let cheap_query = ListingQuery::new("104", 500);
assert!(!cheap_query.matches(&listing));
// Wrong location
let wrong_loc = ListingQuery::new("200", 800);
assert!(!wrong_loc.matches(&listing));
// Size requirement
let big_query = ListingQuery::new("104", 800).with_min_size(60);
assert!(!big_query.matches(&listing));
// Amenity requirement
let needs_parking = ListingQuery::new("104", 800).with_amenities(amenities::PARKING);
assert!(!needs_parking.matches(&listing));
}
#[test]
fn create_message_helpers() {
let id = ServiceIdentity::generate();
let listing = ListingAnnounce::new(ListingType::Room, 20, 400, "100");
let msg = create_announce(&id, &listing, 1).unwrap();
assert_eq!(msg.service_id, HOUSING);
assert_eq!(msg.message_type, MessageType::Announce);
let query = ListingQuery::new("100", 500);
let msg = create_query(&id, &query).unwrap();
assert_eq!(msg.service_id, HOUSING);
assert_eq!(msg.message_type, MessageType::Query);
}
#[test]
fn housing_handler_processes_listing() {
use crate::capabilities;
use crate::router::ServiceRouter;
let mut router = ServiceRouter::new(capabilities::RELAY);
router.register(Box::new(HousingService::relay()));
let id = ServiceIdentity::generate();
let listing = ListingAnnounce::new(ListingType::SharedFlat, 15, 350, "100");
let msg = create_announce(&id, &listing, 1).unwrap();
let action = router.handle(msg, Some(id.public_key())).unwrap();
assert!(matches!(action, ServiceAction::StoreAndForward));
assert_eq!(router.store().service_count(HOUSING), 1);
}
}
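Note the difference from FAPP's specialism check: the amenity filter is all-required rather than any-overlap. Masking the listing's amenity bits with the required set must reproduce the required set exactly, which also makes an empty requirement match everything. In isolation:

```rust
// Bit constants mirroring the amenities module above.
const FURNISHED: u16 = 0x0001;
const BALCONY: u16 = 0x0002;
const PARKING: u16 = 0x0004;

// All-required match, as in ListingQuery::matches: every required bit
// must be present in the listing.
fn has_all(listing: u16, required: u16) -> bool {
    listing & required == required
}

fn main() {
    let listing = FURNISHED | BALCONY;
    assert!(has_all(listing, FURNISHED));
    assert!(has_all(listing, FURNISHED | BALCONY));
    assert!(!has_all(listing, FURNISHED | PARKING)); // PARKING missing
    assert!(has_all(listing, 0)); // empty requirement always matches
    println!("ok");
}
```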


@@ -0,0 +1,4 @@
//! Built-in service implementations.
pub mod fapp;
pub mod housing;


@@ -0,0 +1,406 @@
//! In-memory message store with eviction policies.
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};
use crate::message::ServiceMessage;
/// Configuration for the message store.
#[derive(Debug, Clone)]
pub struct StoreConfig {
/// Maximum messages per service.
pub max_per_service: usize,
/// Maximum messages per sender (per service).
pub max_per_sender: usize,
/// Maximum total messages.
pub max_total: usize,
/// Prune interval in seconds.
pub prune_interval_secs: u64,
}
impl Default for StoreConfig {
fn default() -> Self {
Self {
max_per_service: 10_000,
max_per_sender: 100,
max_total: 50_000,
prune_interval_secs: 300,
}
}
}
/// A stored message with metadata.
#[derive(Debug, Clone)]
pub struct StoredMessage {
pub message: ServiceMessage,
/// Sender's public key (needed for verification).
pub sender_public_key: [u8; 32],
/// When we stored this message.
pub stored_at: u64,
}
/// Generic service message store.
///
/// Organized by service_id, then by sender_address, then by message_id.
pub struct ServiceStore {
config: StoreConfig,
/// service_id -> sender_address -> message_id -> StoredMessage
messages: HashMap<u32, HashMap<[u8; 16], HashMap<[u8; 16], StoredMessage>>>,
/// Total message count.
total_count: usize,
/// Last prune timestamp.
last_prune: u64,
}
impl ServiceStore {
/// Create a new store with default config.
pub fn new() -> Self {
Self::with_config(StoreConfig::default())
}
/// Create with custom config.
pub fn with_config(config: StoreConfig) -> Self {
Self {
config,
messages: HashMap::new(),
total_count: 0,
last_prune: 0,
}
}
/// Store a message, returning true if it was new.
pub fn store(&mut self, message: ServiceMessage, sender_public_key: [u8; 32]) -> bool {
// Prune if interval passed
self.maybe_prune();
let service_id = message.service_id;
let sender_address = message.sender_address;
let message_id = message.id;
// Check per-service limit and evict if needed
{
let service_count: usize = self.messages
.get(&service_id)
.map(|s| s.values().map(|m| m.len()).sum())
.unwrap_or(0);
if service_count >= self.config.max_per_service {
self.evict_oldest_in_service(service_id);
}
}
// Check per-sender limit and evict if needed
{
let sender_count = self.messages
.get(&service_id)
.and_then(|s| s.get(&sender_address))
.map(|m| m.len())
.unwrap_or(0);
if sender_count >= self.config.max_per_sender {
self.evict_oldest_from_sender(service_id, sender_address);
}
}
// Get or create maps
let service_map = self.messages.entry(service_id).or_default();
let sender_map = service_map.entry(sender_address).or_default();
// Check for existing message
let is_new = if let Some(existing) = sender_map.get(&message_id) {
// Existing: only accept if strictly higher sequence
if message.sequence <= existing.message.sequence {
return false;
}
// Replacing an existing entry, so the total count is unchanged
false
} else {
// New message
true
};
let stored_at = now();
sender_map.insert(
message_id,
StoredMessage {
message,
sender_public_key,
stored_at,
},
);
if is_new {
self.total_count += 1;
}
// Accepted: true for both new messages and sequence updates
true
}
/// Get a message by service, sender, and ID.
pub fn get(
&self,
service_id: u32,
sender_address: &[u8; 16],
message_id: &[u8; 16],
) -> Option<&StoredMessage> {
self.messages
.get(&service_id)?
.get(sender_address)?
.get(message_id)
}
/// Get all messages from a sender in a service.
pub fn by_sender(&self, service_id: u32, sender_address: &[u8; 16]) -> Vec<&StoredMessage> {
self.messages
.get(&service_id)
.and_then(|s| s.get(sender_address))
.map(|m| m.values().collect())
.unwrap_or_default()
}
/// Get all messages in a service.
pub fn by_service(&self, service_id: u32) -> Vec<&StoredMessage> {
self.messages
.get(&service_id)
.map(|s| s.values().flat_map(|m| m.values()).collect())
.unwrap_or_default()
}
/// Query messages with a predicate.
pub fn query<F>(&self, service_id: u32, predicate: F) -> Vec<&StoredMessage>
where
F: Fn(&StoredMessage) -> bool,
{
self.by_service(service_id)
.into_iter()
.filter(|m| predicate(m))
.collect()
}
/// Remove a specific message.
pub fn remove(
&mut self,
service_id: u32,
sender_address: &[u8; 16],
message_id: &[u8; 16],
) -> Option<StoredMessage> {
let result = self
.messages
.get_mut(&service_id)?
.get_mut(sender_address)?
.remove(message_id);
if result.is_some() {
self.total_count = self.total_count.saturating_sub(1);
}
result
}
/// Remove all messages from a sender.
pub fn remove_sender(&mut self, service_id: u32, sender_address: &[u8; 16]) -> usize {
let count = self
.messages
.get_mut(&service_id)
.and_then(|s| s.remove(sender_address))
.map(|m| m.len())
.unwrap_or(0);
self.total_count = self.total_count.saturating_sub(count);
count
}
/// Prune expired messages.
pub fn prune_expired(&mut self) -> usize {
let now = now();
let mut removed = 0;
for service_map in self.messages.values_mut() {
for sender_map in service_map.values_mut() {
let expired: Vec<[u8; 16]> = sender_map
.iter()
.filter(|(_, m)| m.message.is_expired())
.map(|(id, _)| *id)
.collect();
for id in expired {
sender_map.remove(&id);
removed += 1;
}
}
}
self.total_count = self.total_count.saturating_sub(removed);
self.last_prune = now;
removed
}
/// Get total message count.
pub fn len(&self) -> usize {
self.total_count
}
/// Check if empty.
pub fn is_empty(&self) -> bool {
self.total_count == 0
}
/// Get count by service.
pub fn service_count(&self, service_id: u32) -> usize {
self.messages
.get(&service_id)
.map(|s| s.values().map(|m| m.len()).sum())
.unwrap_or(0)
}
/// Run prune if interval passed.
fn maybe_prune(&mut self) {
let now = now();
if now.saturating_sub(self.last_prune) >= self.config.prune_interval_secs {
self.prune_expired();
}
}
/// Evict oldest message in a service.
fn evict_oldest_in_service(&mut self, service_id: u32) {
let Some(service_map) = self.messages.get_mut(&service_id) else {
return;
};
let mut oldest: Option<([u8; 16], [u8; 16], u64)> = None;
for (sender, msgs) in service_map.iter() {
for (id, stored) in msgs.iter() {
match oldest {
Some((_, _, ts)) if stored.message.timestamp < ts => {
oldest = Some((*sender, *id, stored.message.timestamp));
}
None => {
oldest = Some((*sender, *id, stored.message.timestamp));
}
_ => {}
}
}
}
if let Some((sender, id, _)) = oldest {
if let Some(sender_map) = service_map.get_mut(&sender) {
sender_map.remove(&id);
self.total_count = self.total_count.saturating_sub(1);
}
}
}
/// Evict oldest message from a sender.
fn evict_oldest_from_sender(&mut self, service_id: u32, sender_address: [u8; 16]) {
let Some(sender_map) = self
.messages
.get_mut(&service_id)
.and_then(|s| s.get_mut(&sender_address))
else {
return;
};
let oldest = sender_map
.iter()
.min_by_key(|(_, m)| m.message.timestamp)
.map(|(id, _)| *id);
if let Some(id) = oldest {
sender_map.remove(&id);
self.total_count = self.total_count.saturating_sub(1);
}
}
}
impl Default for ServiceStore {
fn default() -> Self {
Self::new()
}
}
fn now() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{identity::ServiceIdentity, message::ServiceMessage, service_ids::FAPP};
fn make_message(id: &ServiceIdentity, seq: u64) -> ServiceMessage {
ServiceMessage::announce(id, FAPP, b"test".to_vec(), seq)
}
#[test]
fn store_and_retrieve() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg = make_message(&id, 1);
assert!(store.store(msg.clone(), id.public_key()));
assert_eq!(store.len(), 1);
let retrieved = store.get(FAPP, &id.address(), &msg.id);
assert!(retrieved.is_some());
}
#[test]
fn duplicate_rejected() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg = make_message(&id, 1);
assert!(store.store(msg.clone(), id.public_key()));
assert!(!store.store(msg.clone(), id.public_key())); // Duplicate
assert_eq!(store.len(), 1);
}
#[test]
fn higher_sequence_updates() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
let msg1 = make_message(&id, 1);
let mut msg2 = make_message(&id, 2);
msg2.id = msg1.id; // Same ID
store.store(msg1.clone(), id.public_key());
assert!(store.store(msg2.clone(), id.public_key())); // Updates
let retrieved = store.get(FAPP, &id.address(), &msg1.id).unwrap();
assert_eq!(retrieved.message.sequence, 2);
}
#[test]
fn query_by_sender() {
let mut store = ServiceStore::new();
let id1 = ServiceIdentity::generate();
let id2 = ServiceIdentity::generate();
store.store(make_message(&id1, 1), id1.public_key());
store.store(make_message(&id1, 2), id1.public_key());
store.store(make_message(&id2, 1), id2.public_key());
let sender1_msgs = store.by_sender(FAPP, &id1.address());
assert_eq!(sender1_msgs.len(), 2);
let sender2_msgs = store.by_sender(FAPP, &id2.address());
assert_eq!(sender2_msgs.len(), 1);
}
#[test]
fn remove_sender() {
let mut store = ServiceStore::new();
let id = ServiceIdentity::generate();
store.store(make_message(&id, 1), id.public_key());
store.store(make_message(&id, 2), id.public_key());
assert_eq!(store.len(), 2);
let removed = store.remove_sender(FAPP, &id.address());
assert_eq!(removed, 2);
assert_eq!(store.len(), 0);
}
}
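
The store above maintains one invariant worth calling out: a three-level service → sender → id map whose `total_count` must stay in sync with every insert, removal, and eviction. A minimal standalone sketch of that bookkeeping (simplified keys and values — the real store keys on `[u8; 16]` addresses and holds full `StoredMessage` entries):

```rust
use std::collections::HashMap;

/// Sketch of the store's bookkeeping: service -> sender -> id -> timestamp,
/// plus a running total_count updated on every insert and eviction.
struct TinyStore {
    messages: HashMap<u32, HashMap<u64, HashMap<u64, u64>>>,
    total_count: usize,
}

impl TinyStore {
    fn new() -> Self {
        Self { messages: HashMap::new(), total_count: 0 }
    }

    fn insert(&mut self, service: u32, sender: u64, id: u64, ts: u64) {
        let slot = self
            .messages
            .entry(service)
            .or_default()
            .entry(sender)
            .or_default();
        if slot.insert(id, ts).is_none() {
            self.total_count += 1; // count only genuinely new messages
        }
    }

    /// Drop the single oldest message in a service, mirroring
    /// evict_oldest_in_service: scan all senders, pick the minimum timestamp.
    fn evict_oldest(&mut self, service: u32) {
        let Some(service_map) = self.messages.get_mut(&service) else { return };
        let oldest = service_map
            .iter()
            .flat_map(|(sender, msgs)| msgs.iter().map(move |(id, ts)| (*sender, *id, *ts)))
            .min_by_key(|&(_, _, ts)| ts);
        if let Some((sender, id, _)) = oldest {
            if let Some(m) = service_map.get_mut(&sender) {
                m.remove(&id);
                self.total_count = self.total_count.saturating_sub(1);
            }
        }
    }
}
```

Like the real `evict_oldest_in_service`, this leaves an empty sender map behind after the last eviction; both versions accept that small amount of slack rather than compacting on every removal.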

View File

@@ -0,0 +1,290 @@
//! Verification framework for building trust in decentralized services.
//!
//! Verification levels:
//! - 0: None (bare announce)
//! - 1: Self-asserted (profile URL, metadata)
//! - 2: Endorsed by trusted peers
//! - 3: Registry-verified (KBV for therapists, trade registry for craftsmen)
use serde::{Deserialize, Serialize};
use crate::identity::ServiceIdentity;
/// Verification levels (higher = more trusted).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
#[repr(u8)]
pub enum VerificationLevel {
#[default]
None = 0,
SelfAsserted = 1,
PeerEndorsed = 2,
RegistryVerified = 3,
}
impl From<u8> for VerificationLevel {
fn from(value: u8) -> Self {
match value {
1 => VerificationLevel::SelfAsserted,
2 => VerificationLevel::PeerEndorsed,
3.. => VerificationLevel::RegistryVerified,
_ => VerificationLevel::None,
}
}
}
/// A verification attestation attached to a service message.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Verification {
/// Verification level.
pub level: u8,
/// Verifier's mesh address.
pub verifier_address: [u8; 16],
/// What is being verified (e.g., "license", "identity").
pub claim: String,
/// Optional external reference (URL, registry ID).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub reference: Option<String>,
/// Signature over (level || subject_address || claim).
pub signature: Vec<u8>,
/// Timestamp of verification.
pub timestamp: u64,
/// Optional expiry timestamp.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expires: Option<u64>,
}
impl Verification {
/// Create a new peer endorsement.
pub fn peer_endorsement(
verifier: &ServiceIdentity,
subject_address: &[u8; 16],
claim: impl Into<String>,
) -> Self {
Self::new(
verifier,
VerificationLevel::PeerEndorsed,
subject_address,
claim,
None,
)
}
/// Create a registry verification.
pub fn registry(
verifier: &ServiceIdentity,
subject_address: &[u8; 16],
claim: impl Into<String>,
reference: impl Into<String>,
) -> Self {
Self::new(
verifier,
VerificationLevel::RegistryVerified,
subject_address,
claim,
Some(reference.into()),
)
}
/// Create a new verification.
pub fn new(
verifier: &ServiceIdentity,
level: VerificationLevel,
subject_address: &[u8; 16],
claim: impl Into<String>,
reference: Option<String>,
) -> Self {
use std::time::{SystemTime, UNIX_EPOCH};
let claim = claim.into();
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let signable = Self::signable_bytes(level as u8, subject_address, &claim);
let signature = verifier.sign(&signable).to_vec();
Self {
level: level as u8,
verifier_address: verifier.address(),
claim,
reference,
signature,
timestamp,
expires: None,
}
}
/// Set expiry time.
pub fn with_expiry(mut self, expires: u64) -> Self {
self.expires = Some(expires);
self
}
/// Create signable bytes.
fn signable_bytes(level: u8, subject_address: &[u8; 16], claim: &str) -> Vec<u8> {
let mut buf = Vec::with_capacity(17 + claim.len());
buf.push(level);
buf.extend_from_slice(subject_address);
buf.extend_from_slice(claim.as_bytes());
buf
}
/// Verify this attestation.
pub fn verify(&self, verifier_public_key: &[u8; 32], subject_address: &[u8; 16]) -> bool {
use crate::identity::compute_address;
// Verify verifier address matches key
if compute_address(verifier_public_key) != self.verifier_address {
return false;
}
// Check expiry
if let Some(expires) = self.expires {
use std::time::{SystemTime, UNIX_EPOCH};
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
if now > expires {
return false;
}
}
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = Self::signable_bytes(self.level, subject_address, &self.claim);
ServiceIdentity::verify(verifier_public_key, &signable, &sig)
}
}
/// Set of known trusted verifiers (registries, endorsers).
#[derive(Default)]
pub struct TrustedVerifiers {
/// Known public keys with their trust level.
verifiers: Vec<TrustedVerifier>,
}
/// A trusted verifier entry.
#[derive(Clone)]
pub struct TrustedVerifier {
pub public_key: [u8; 32],
pub address: [u8; 16],
pub name: String,
pub max_level: VerificationLevel,
}
impl TrustedVerifiers {
/// Create empty set.
pub fn new() -> Self {
Self::default()
}
/// Add a trusted verifier.
pub fn add(
&mut self,
public_key: [u8; 32],
name: impl Into<String>,
max_level: VerificationLevel,
) {
use crate::identity::compute_address;
self.verifiers.push(TrustedVerifier {
public_key,
address: compute_address(&public_key),
name: name.into(),
max_level,
});
}
/// Find a verifier by address.
pub fn find_by_address(&self, address: &[u8; 16]) -> Option<&TrustedVerifier> {
self.verifiers.iter().find(|v| &v.address == address)
}
/// Verify a verification against known trusted verifiers.
/// Returns the effective level (or 0 if not trusted).
pub fn check(&self, verification: &Verification, subject_address: &[u8; 16]) -> u8 {
let Some(verifier) = self.find_by_address(&verification.verifier_address) else {
return 0;
};
// Level cannot exceed verifier's max
let claimed_level = verification.level.min(verifier.max_level as u8);
// Actually verify the signature
if verification.verify(&verifier.public_key, subject_address) {
claimed_level
} else {
0
}
}
/// Get the highest trusted verification level from a list.
pub fn highest_level(
&self,
verifications: &[Verification],
subject_address: &[u8; 16],
) -> VerificationLevel {
verifications
.iter()
.map(|v| self.check(v, subject_address))
.max()
.map(VerificationLevel::from)
.unwrap_or(VerificationLevel::None)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn peer_endorsement_roundtrip() {
let verifier = ServiceIdentity::generate();
let subject_address = [1u8; 16];
let v = Verification::peer_endorsement(&verifier, &subject_address, "good_actor");
assert!(v.verify(&verifier.public_key(), &subject_address));
assert_eq!(v.level, VerificationLevel::PeerEndorsed as u8);
}
#[test]
fn trusted_verifiers_check() {
let verifier = ServiceIdentity::generate();
let subject_address = [2u8; 16];
let mut trusted = TrustedVerifiers::new();
trusted.add(verifier.public_key(), "Test Registry", VerificationLevel::RegistryVerified);
let v = Verification::registry(&verifier, &subject_address, "licensed", "REG-12345");
let level = trusted.check(&v, &subject_address);
assert_eq!(level, VerificationLevel::RegistryVerified as u8);
}
#[test]
fn untrusted_verifier_returns_zero() {
let verifier = ServiceIdentity::generate();
let subject_address = [3u8; 16];
let trusted = TrustedVerifiers::new(); // Empty
let v = Verification::registry(&verifier, &subject_address, "licensed", "REG-999");
let level = trusted.check(&v, &subject_address);
assert_eq!(level, 0);
}
#[test]
fn expired_verification_fails() {
let verifier = ServiceIdentity::generate();
let subject_address = [4u8; 16];
let v = Verification::peer_endorsement(&verifier, &subject_address, "trusted")
.with_expiry(1); // Expired in 1970
assert!(!v.verify(&verifier.public_key(), &subject_address));
}
}
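
The core of `TrustedVerifiers::check` is a two-step gate: clamp the claimed level to the verifier's registered maximum, then zero it entirely unless the signature holds. That logic can be sketched standalone — `sig_ok` here stands in for the real Ed25519 check done by `ServiceIdentity::verify`:

```rust
/// Effective trust level for a verification: the claimed level clamped to
/// the verifier's maximum, and 0 whenever the signature does not check out.
/// `sig_ok` is a stand-in for the actual signature verification.
fn effective_level(claimed: u8, verifier_max: u8, sig_ok: bool) -> u8 {
    if !sig_ok {
        return 0;
    }
    claimed.min(verifier_max)
}
```

The clamp matters: a verifier registered as `PeerEndorsed` (max 2) cannot mint `RegistryVerified` (3) attestations even with a valid signature — the effective level is capped at 2.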

View File

@@ -0,0 +1,259 @@
//! Wire format for service messages.
//!
//! Fixed binary header and signature for efficient network transmission.
//! CBOR is used for the optional verifications block and structured payloads.
use std::io::{Cursor, Read};
use crate::error::ServiceError;
use crate::message::{MessageType, ServiceMessage};
/// Wire message header (fixed 64 bytes).
///
/// ```text
/// ┌─────────────────────────────────────────────────────┐
/// │ 0-3 │ service_id (u32 LE) │
/// │ 4 │ message_type (u8) │
/// │ 5 │ version (u8) │
/// │ 6-7 │ flags (u16 LE, reserved) │
/// │ 8-23 │ message_id (16 bytes) │
/// │ 24-39 │ sender_address (16 bytes) │
/// │ 40-47 │ sequence (u64 LE) │
/// │ 48-49 │ ttl_hours (u16 LE) │
/// │ 50-57 │ timestamp (u64 LE) │
/// │ 58 │ hop_count (u8) │
/// │ 59 │ max_hops (u8) │
/// │ 60-63 │ payload_len (u32 LE) │
/// └─────────────────────────────────────────────────────┘
/// Followed by:
/// │ 64-... │ signature (64 bytes) │
/// │ signature_end-.. │ payload (payload_len bytes) │
/// │ payload_end-.. │ verifications (CBOR, optional) │
/// ```
const HEADER_SIZE: usize = 64;
const SIGNATURE_SIZE: usize = 64;
/// Encode a ServiceMessage to bytes.
pub fn encode(msg: &ServiceMessage) -> Result<Vec<u8>, ServiceError> {
let verifications_bytes = if msg.verifications.is_empty() {
Vec::new()
} else {
let mut buf = Vec::new();
ciborium::into_writer(&msg.verifications, &mut buf)?;
buf
};
let total_size = HEADER_SIZE + SIGNATURE_SIZE + msg.payload.len() + verifications_bytes.len();
let mut buf = Vec::with_capacity(total_size);
// Header
buf.extend_from_slice(&msg.service_id.to_le_bytes()); // 0-3
buf.push(msg.message_type as u8); // 4
buf.push(msg.version); // 5
buf.extend_from_slice(&0u16.to_le_bytes()); // 6-7 flags (reserved)
buf.extend_from_slice(&msg.id); // 8-23
buf.extend_from_slice(&msg.sender_address); // 24-39
buf.extend_from_slice(&msg.sequence.to_le_bytes()); // 40-47
buf.extend_from_slice(&msg.ttl_hours.to_le_bytes()); // 48-49
buf.extend_from_slice(&msg.timestamp.to_le_bytes()); // 50-57
buf.push(msg.hop_count); // 58
buf.push(msg.max_hops); // 59
buf.extend_from_slice(&(msg.payload.len() as u32).to_le_bytes()); // 60-63
// Signature
if msg.signature.len() != SIGNATURE_SIZE {
return Err(ServiceError::InvalidFormat(format!(
"signature must be {} bytes, got {}",
SIGNATURE_SIZE,
msg.signature.len()
)));
}
buf.extend_from_slice(&msg.signature);
// Payload
buf.extend_from_slice(&msg.payload);
// Verifications (optional)
buf.extend_from_slice(&verifications_bytes);
Ok(buf)
}
/// Decode bytes to a ServiceMessage.
pub fn decode(data: &[u8]) -> Result<ServiceMessage, ServiceError> {
if data.len() < HEADER_SIZE + SIGNATURE_SIZE {
return Err(ServiceError::InvalidFormat("message too short".into()));
}
let mut cursor = Cursor::new(data);
let mut buf4 = [0u8; 4];
let mut buf8 = [0u8; 8];
let mut buf16 = [0u8; 16];
let mut buf2 = [0u8; 2];
// Read header
cursor.read_exact(&mut buf4)?;
let service_id = u32::from_le_bytes(buf4);
let mut type_byte = [0u8; 1];
cursor.read_exact(&mut type_byte)?;
let message_type = MessageType::try_from(type_byte[0])
.map_err(|_| ServiceError::InvalidFormat("invalid message type".into()))?;
cursor.read_exact(&mut type_byte)?;
let version = type_byte[0];
cursor.read_exact(&mut buf2)?; // flags (ignored)
cursor.read_exact(&mut buf16)?;
let id = buf16;
cursor.read_exact(&mut buf16)?;
let sender_address = buf16;
cursor.read_exact(&mut buf8)?;
let sequence = u64::from_le_bytes(buf8);
cursor.read_exact(&mut buf2)?;
let ttl_hours = u16::from_le_bytes(buf2);
cursor.read_exact(&mut buf8)?;
let timestamp = u64::from_le_bytes(buf8);
cursor.read_exact(&mut type_byte)?;
let hop_count = type_byte[0];
cursor.read_exact(&mut type_byte)?;
let max_hops = type_byte[0];
cursor.read_exact(&mut buf4)?;
let payload_len = u32::from_le_bytes(buf4) as usize;
// Read signature
let mut signature = vec![0u8; SIGNATURE_SIZE];
cursor.read_exact(&mut signature)?;
// Read payload
if data.len() < HEADER_SIZE + SIGNATURE_SIZE + payload_len {
return Err(ServiceError::InvalidFormat("payload truncated".into()));
}
let mut payload = vec![0u8; payload_len];
cursor.read_exact(&mut payload)?;
// Read verifications (remaining bytes)
let verifications = if cursor.position() < data.len() as u64 {
let mut remaining = Vec::new();
cursor.read_to_end(&mut remaining)?;
if remaining.is_empty() {
Vec::new()
} else {
ciborium::from_reader(&remaining[..])
.map_err(|e| ServiceError::Serialization(e.to_string()))?
}
} else {
Vec::new()
};
Ok(ServiceMessage {
service_id,
message_type,
version,
id,
sender_address,
payload,
signature,
verifications,
sequence,
ttl_hours,
timestamp,
hop_count,
max_hops,
})
}
// Convert std::io::Error into ServiceError so `?` works on the Read calls above.
impl From<std::io::Error> for ServiceError {
fn from(e: std::io::Error) -> Self {
ServiceError::InvalidFormat(e.to_string())
}
}
/// Encode a payload struct to CBOR.
pub fn encode_payload<T: serde::Serialize>(payload: &T) -> Result<Vec<u8>, ServiceError> {
let mut buf = Vec::new();
ciborium::into_writer(payload, &mut buf)?;
Ok(buf)
}
/// Decode a payload from CBOR.
pub fn decode_payload<T: serde::de::DeserializeOwned>(data: &[u8]) -> Result<T, ServiceError> {
ciborium::from_reader(data).map_err(|e| ServiceError::Serialization(e.to_string()))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::identity::ServiceIdentity;
use crate::service_ids::FAPP;
use crate::verification::Verification;
#[test]
fn roundtrip_simple() {
let id = ServiceIdentity::generate();
let msg = ServiceMessage::announce(&id, FAPP, b"hello world".to_vec(), 42);
let encoded = encode(&msg).unwrap();
let decoded = decode(&encoded).unwrap();
assert_eq!(decoded.service_id, FAPP);
assert_eq!(decoded.message_type, MessageType::Announce);
assert_eq!(decoded.sequence, 42);
assert_eq!(decoded.payload, b"hello world");
assert_eq!(decoded.signature, msg.signature);
}
#[test]
fn roundtrip_with_verifications() {
let id = ServiceIdentity::generate();
let verifier = ServiceIdentity::generate();
let mut msg = ServiceMessage::announce(&id, FAPP, b"payload".to_vec(), 1);
msg.add_verification(Verification::peer_endorsement(
&verifier,
&id.address(),
"trusted",
));
let encoded = encode(&msg).unwrap();
let decoded = decode(&encoded).unwrap();
assert_eq!(decoded.verifications.len(), 1);
assert_eq!(decoded.verifications[0].claim, "trusted");
}
#[test]
fn payload_codec() {
#[derive(serde::Serialize, serde::Deserialize, Debug, PartialEq)]
struct TestPayload {
name: String,
value: i32,
}
let payload = TestPayload {
name: "test".into(),
value: 123,
};
let encoded = encode_payload(&payload).unwrap();
let decoded: TestPayload = decode_payload(&encoded).unwrap();
assert_eq!(payload, decoded);
}
#[test]
fn truncated_rejected() {
let result = decode(&[0u8; 10]);
assert!(matches!(result, Err(ServiceError::InvalidFormat(_))));
}
}
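
The fixed 64-byte header layout from the diagram above can be written out standalone as a sanity check on the offsets; field names mirror `ServiceMessage`, all multi-byte integers are little-endian, and the flags word at bytes 6–7 stays zero (reserved):

```rust
/// Sketch of the fixed 64-byte wire header. Offsets match the layout diagram;
/// values passed in are illustrative, not tied to any real message.
fn encode_header(
    service_id: u32,
    message_type: u8,
    version: u8,
    id: [u8; 16],
    sender: [u8; 16],
    sequence: u64,
    ttl_hours: u16,
    timestamp: u64,
    hop_count: u8,
    max_hops: u8,
    payload_len: u32,
) -> [u8; 64] {
    let mut h = [0u8; 64];
    h[0..4].copy_from_slice(&service_id.to_le_bytes()); // 0-3
    h[4] = message_type;                                // 4
    h[5] = version;                                     // 5
    // 6-7: flags, reserved (left zero)
    h[8..24].copy_from_slice(&id);                      // 8-23
    h[24..40].copy_from_slice(&sender);                 // 24-39
    h[40..48].copy_from_slice(&sequence.to_le_bytes()); // 40-47
    h[48..50].copy_from_slice(&ttl_hours.to_le_bytes()); // 48-49
    h[50..58].copy_from_slice(&timestamp.to_le_bytes()); // 50-57
    h[58] = hop_count;                                  // 58
    h[59] = max_hops;                                   // 59
    h[60..64].copy_from_slice(&(payload_len).to_le_bytes()); // 60-63
    h
}
```

Every field consumes exactly its documented span, so the slices partition the 64 bytes with no gaps — the same property `encode`/`decode` rely on when they append the 64-byte signature immediately after the header.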

View File

@@ -1,18 +1,19 @@
[package]
name = "quicproquo-client"
name = "quicprochat-client"
version = "0.1.0"
edition = "2021"
description = "CLI client for quicproquo."
license = "MIT"
edition.workspace = true
description = "CLI client for quicprochat."
license = "Apache-2.0 OR MIT"
repository.workspace = true
[[bin]]
name = "qpq"
name = "qpc"
path = "src/main.rs"
[dependencies]
quicproquo-core = { path = "../quicproquo-core" }
quicproquo-proto = { path = "../quicproquo-proto" }
quicproquo-kt = { path = "../quicproquo-kt" }
quicprochat-core = { path = "../quicprochat-core" }
quicprochat-proto = { path = "../quicprochat-proto" }
quicprochat-kt = { path = "../quicprochat-kt" }
openmls_rust_crypto = { workspace = true }
# Serialisation + RPC
@@ -49,8 +50,9 @@ rustls = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
# CLI
# CLI + config
clap = { workspace = true }
toml = { workspace = true }
# Local message/conversation storage
rusqlite = { workspace = true }
@@ -65,7 +67,7 @@ rpassword = "5"
mdns-sd = { version = "0.12", optional = true }
# Optional P2P transport for direct node-to-node messaging.
quicproquo-p2p = { path = "../quicproquo-p2p", optional = true }
quicprochat-p2p = { path = "../quicprochat-p2p", optional = true }
# Optional TUI dependencies (Ratatui full-screen interface).
ratatui = { version = "0.29", optional = true, default-features = false, features = ["crossterm"] }
@@ -74,9 +76,9 @@ crossterm = { version = "0.28", optional = true }
# YAML playbook parsing (only compiled with --features playbook).
serde_yaml = { version = "0.9", optional = true }
# v2 SDK-based CLI (thin shell over quicproquo-sdk).
quicproquo-sdk = { path = "../quicproquo-sdk", optional = true }
quicproquo-rpc = { path = "../quicproquo-rpc", optional = true }
# v2 SDK-based CLI (thin shell over quicprochat-sdk).
quicprochat-sdk = { path = "../quicprochat-sdk", optional = true }
quicprochat-rpc = { path = "../quicprochat-rpc", optional = true }
rustyline = { workspace = true, optional = true }
[lints]
@@ -84,15 +86,15 @@ workspace = true
[features]
# Enable mesh-mode features: mDNS local peer discovery + P2P transport.
# Build: cargo build -p quicproquo-client --features mesh
mesh = ["dep:mdns-sd", "dep:quicproquo-p2p"]
# Enable full-screen Ratatui TUI: cargo build -p quicproquo-client --features tui
# Build: cargo build -p quicprochat-client --features mesh
mesh = ["dep:mdns-sd", "dep:quicprochat-p2p"]
# Enable full-screen Ratatui TUI: cargo build -p quicprochat-client --features tui
tui = ["dep:ratatui", "dep:crossterm"]
# Enable playbook (scripted command execution): YAML parser + serde derives.
# Build: cargo build -p quicproquo-client --features playbook
# Build: cargo build -p quicprochat-client --features playbook
playbook = ["dep:serde_yaml"]
# v2 CLI over SDK: cargo build -p quicproquo-client --features v2
v2 = ["dep:quicproquo-sdk", "dep:quicproquo-rpc", "dep:rustyline"]
# v2 CLI over SDK: cargo build -p quicprochat-client --features v2
v2 = ["dep:quicprochat-sdk", "dep:quicprochat-rpc", "dep:rustyline"]
[dev-dependencies]
dashmap = { workspace = true }

View File

@@ -6,7 +6,7 @@
use std::collections::HashMap;
use quicproquo_proto::node_capnp::node_service;
use quicprochat_proto::node_capnp::node_service;
use super::repl::{Input, SlashCommand, parse_input};
use super::session::SessionState;
@@ -109,6 +109,8 @@ pub enum Command {
History { count: usize },
// Mesh
MeshStart,
MeshStop,
MeshPeers,
MeshServer { addr: String },
MeshSend { peer_id: String, message: String },
@@ -117,6 +119,8 @@ pub enum Command {
MeshRoute,
MeshIdentity,
MeshStore,
MeshTrace { address: String },
MeshStats,
// Security / crypto
Verify { username: String },
@@ -171,6 +175,8 @@ impl Command {
Command::GroupInfo => Some(SlashCommand::GroupInfo),
Command::Rename { name } => Some(SlashCommand::Rename { name }),
Command::History { count } => Some(SlashCommand::History { count }),
Command::MeshStart => Some(SlashCommand::MeshStart),
Command::MeshStop => Some(SlashCommand::MeshStop),
Command::MeshPeers => Some(SlashCommand::MeshPeers),
Command::MeshServer { addr } => Some(SlashCommand::MeshServer { addr }),
Command::MeshSend { peer_id, message } => {
@@ -183,6 +189,8 @@ impl Command {
Command::MeshRoute => Some(SlashCommand::MeshRoute),
Command::MeshIdentity => Some(SlashCommand::MeshIdentity),
Command::MeshStore => Some(SlashCommand::MeshStore),
Command::MeshTrace { address } => Some(SlashCommand::MeshTrace { address }),
Command::MeshStats => Some(SlashCommand::MeshStats),
Command::Verify { username } => Some(SlashCommand::Verify { username }),
Command::UpdateKey => Some(SlashCommand::UpdateKey),
Command::Typing => Some(SlashCommand::Typing),
@@ -332,6 +340,8 @@ fn slash_to_command(sc: SlashCommand) -> Command {
SlashCommand::GroupInfo => Command::GroupInfo,
SlashCommand::Rename { name } => Command::Rename { name },
SlashCommand::History { count } => Command::History { count },
SlashCommand::MeshStart => Command::MeshStart,
SlashCommand::MeshStop => Command::MeshStop,
SlashCommand::MeshPeers => Command::MeshPeers,
SlashCommand::MeshServer { addr } => Command::MeshServer { addr },
SlashCommand::MeshSend { peer_id, message } => Command::MeshSend { peer_id, message },
@@ -342,6 +352,8 @@ fn slash_to_command(sc: SlashCommand) -> Command {
SlashCommand::MeshRoute => Command::MeshRoute,
SlashCommand::MeshIdentity => Command::MeshIdentity,
SlashCommand::MeshStore => Command::MeshStore,
SlashCommand::MeshTrace { address } => Command::MeshTrace { address },
SlashCommand::MeshStats => Command::MeshStats,
SlashCommand::Verify { username } => Command::Verify { username },
SlashCommand::UpdateKey => Command::UpdateKey,
SlashCommand::Typing => Command::Typing,
@@ -394,6 +406,8 @@ async fn execute_slash(
SlashCommand::GroupInfo => cmd_group_info(session, client).await,
SlashCommand::Rename { name } => cmd_rename(session, &name),
SlashCommand::History { count } => cmd_history(session, count),
SlashCommand::MeshStart => cmd_mesh_start(session).await,
SlashCommand::MeshStop => cmd_mesh_stop(session).await,
SlashCommand::MeshPeers => cmd_mesh_peers(),
SlashCommand::MeshServer { addr } => {
super::display::print_status(&format!(
@@ -401,12 +415,14 @@ async fn execute_slash(
));
Ok(())
}
SlashCommand::MeshSend { peer_id, message } => cmd_mesh_send(&peer_id, &message),
SlashCommand::MeshBroadcast { topic, message } => cmd_mesh_broadcast(&topic, &message),
SlashCommand::MeshSubscribe { topic } => cmd_mesh_subscribe(&topic),
SlashCommand::MeshSend { peer_id, message } => cmd_mesh_send(session, &peer_id, &message).await,
SlashCommand::MeshBroadcast { topic, message } => cmd_mesh_broadcast(session, &topic, &message).await,
SlashCommand::MeshSubscribe { topic } => cmd_mesh_subscribe(session, &topic),
SlashCommand::MeshRoute => cmd_mesh_route(session),
SlashCommand::MeshIdentity => cmd_mesh_identity(session),
SlashCommand::MeshStore => cmd_mesh_store(session),
SlashCommand::MeshTrace { address } => cmd_mesh_trace(session, &address),
SlashCommand::MeshStats => cmd_mesh_stats(session),
SlashCommand::Verify { username } => cmd_verify(session, client, &username).await,
SlashCommand::UpdateKey => cmd_update_key(session, client).await,
SlashCommand::Typing => cmd_typing(session, client).await,

View File

@@ -5,7 +5,7 @@ use opaque_ke::{
ClientLogin, ClientLoginFinishParameters, ClientRegistration,
ClientRegistrationFinishParameters, CredentialResponse, RegistrationResponse,
};
use quicproquo_core::{
use quicprochat_core::{
generate_key_package, hybrid_decrypt, hybrid_encrypt, opaque_auth::OpaqueSuite,
GroupMember, HybridKeypair, IdentityKeypair, ReceivedMessage,
};
@@ -317,7 +317,7 @@ fn derive_identity_for_login(
/// The error message contains "E018" if the user already exists.
/// Does NOT require init_auth() — OPAQUE RPCs are unauthenticated.
pub(crate) async fn opaque_register(
client: &quicproquo_proto::node_capnp::node_service::Client,
client: &quicprochat_proto::node_capnp::node_service::Client,
username: &str,
password: &str,
identity_key: Option<&[u8]>,
@@ -378,7 +378,7 @@ pub(crate) async fn opaque_register(
/// Perform OPAQUE login and return the raw session token bytes.
/// Does NOT require init_auth() — OPAQUE RPCs are unauthenticated.
pub async fn opaque_login(
client: &quicproquo_proto::node_capnp::node_service::Client,
client: &quicprochat_proto::node_capnp::node_service::Client,
username: &str,
password: &str,
identity_key: &[u8],
@@ -647,8 +647,8 @@ pub async fn cmd_fetch_key(
/// Run a two-party MLS demo against the unified server.
pub async fn cmd_demo_group(server: &str, ca_cert: &Path, server_name: &str) -> anyhow::Result<()> {
let creator_state_path = PathBuf::from("qpq-demo-creator.bin");
let joiner_state_path = PathBuf::from("qpq-demo-joiner.bin");
let creator_state_path = PathBuf::from("qpc-demo-creator.bin");
let joiner_state_path = PathBuf::from("qpc-demo-joiner.bin");
let (mut creator, creator_hybrid_opt) =
load_or_init_state(&creator_state_path, None)?.into_parts(&creator_state_path)?;
@@ -1298,7 +1298,7 @@ pub async fn cmd_chat(
///
/// `conv_db` is the path to the conversation SQLite database (`.convdb` file).
/// `conv_id_hex` is the 32-hex-character conversation ID to export.
/// `output` is the path for the `.qpqt` transcript file to write.
/// `output` is the path for the `.qpct` transcript file to write.
/// `transcript_password` is used to derive the encryption key (Argon2id).
/// `db_password` is the optional SQLCipher password for the conversation database.
pub fn cmd_export(
@@ -1308,7 +1308,7 @@ pub fn cmd_export(
transcript_password: &str,
db_password: Option<&str>,
) -> anyhow::Result<()> {
use quicproquo_core::{TranscriptRecord, TranscriptWriter};
use quicprochat_core::{TranscriptRecord, TranscriptWriter};
use super::conversation::{ConversationId, ConversationStore};
// Decode conversation ID from hex.
@@ -1367,7 +1367,7 @@ pub fn cmd_export(
conv.display_name,
output.display()
);
println!("Decrypt with: qpq export verify --input <file> --password <password>");
println!("Decrypt with: qpc export verify --input <file> --password <password>");
Ok(())
}
@@ -1376,7 +1376,7 @@ pub fn cmd_export(
///
/// Prints a summary. Does not require the encryption password (structural check only).
pub fn cmd_export_verify(input: &Path) -> anyhow::Result<()> {
use quicproquo_core::{validate_transcript_structure, ChainVerdict};
use quicprochat_core::{validate_transcript_structure, ChainVerdict};
let data = std::fs::read(input)
.with_context(|| format!("read transcript file '{}'", input.display()))?;

View File

@@ -1,6 +1,6 @@
//! mDNS-based peer discovery for Freifunk / community mesh deployments.
//!
//! Browse for `_quicproquo._udp.local.` services on the local network and
//! Browse for `_quicprochat._udp.local.` services on the local network and
//! surface them as [`DiscoveredPeer`] structs. Servers announce themselves
//! automatically on startup; this module lets clients find them without manual
//! configuration.
@@ -8,7 +8,7 @@
//! # Usage
//!
//! ```no_run
//! use quicproquo_client::client::mesh_discovery::MeshDiscovery;
//! use quicprochat_client::client::mesh_discovery::MeshDiscovery;
//!
//! let disc = MeshDiscovery::start()?;
//! // Give mDNS time to collect announcements before reading.
@@ -16,7 +16,7 @@
//! for peer in disc.peers() {
//! println!("found: {} at {}", peer.domain, peer.server_addr);
//! }
//! # Ok::<(), quicproquo_client::client::mesh_discovery::MeshDiscoveryError>(())
//! # Ok::<(), quicprochat_client::client::mesh_discovery::MeshDiscoveryError>(())
//! ```
#[cfg(feature = "mesh")]
@@ -27,7 +27,7 @@ use std::sync::{Arc, Mutex};
#[cfg(feature = "mesh")]
use std::collections::HashMap;
/// A qpq server discovered on the local network via mDNS.
/// A qpc server discovered on the local network via mDNS.
#[derive(Debug, Clone)]
pub struct DiscoveredPeer {
/// Federation domain of the remote server (e.g. `"node1.freifunk.net"`).
@@ -57,7 +57,7 @@ pub enum MeshDiscoveryError {
}
impl MeshDiscovery {
/// Start browsing for `_quicproquo._udp.local.` services.
/// Start browsing for `_quicprochat._udp.local.` services.
///
/// Returns immediately; peers are collected in the background.
/// Returns [`MeshDiscoveryError::FeatureDisabled`] when built without the
@@ -79,7 +79,7 @@ impl MeshDiscovery {
.map_err(|e| MeshDiscoveryError::DaemonError(e.to_string()))?;
let receiver = daemon
.browse("_quicproquo._udp.local.")
.browse("_quicprochat._udp.local.")
.map_err(|e| MeshDiscoveryError::BrowseError(e.to_string()))?;
let peers: Arc<Mutex<HashMap<String, DiscoveredPeer>>> =
@@ -91,7 +91,7 @@ impl MeshDiscovery {
for event in receiver {
match event {
ServiceEvent::ServiceResolved(info) => {
// Extract the qpq server address from TXT records.
// Extract the qpc server address from TXT records.
let server_addr_str = info
.get_property_val_str("server")
.map(|s| s.to_string());

View File

@@ -24,7 +24,7 @@ use std::path::Path;
use std::time::{Duration, Instant};
use anyhow::{Context, bail};
use quicproquo_proto::node_capnp::node_service;
use quicprochat_proto::node_capnp::node_service;
use serde::{Deserialize, Serialize};
use super::command_engine::{AssertCondition, CmpOp, Command, CommandRegistry};
@@ -434,6 +434,10 @@ impl PlaybookRunner {
"mesh-route" => Ok(Command::MeshRoute),
"mesh-identity" | "mesh-id" => Ok(Command::MeshIdentity),
"mesh-store" => Ok(Command::MeshStore),
"mesh-trace" => Ok(Command::MeshTrace {
address: self.resolve_str(&step.args, "address")?,
}),
"mesh-stats" => Ok(Command::MeshStats),
other => bail!("unknown command: {other}"),
}

View File

@@ -9,13 +9,13 @@ use std::sync::Arc;
use std::time::Duration;
use anyhow::Context;
use quicproquo_core::{
use quicprochat_core::{
AppMessage, DiskKeyStore, GroupMember, IdentityKeypair, ReceivedMessage,
compute_safety_number, hybrid_encrypt, parse as parse_app_msg, serialize_chat,
serialize_delete, serialize_dummy, serialize_edit, serialize_file_ref, serialize_reaction,
serialize_read_receipt, serialize_typing,
};
use quicproquo_proto::node_capnp::node_service;
use quicprochat_proto::node_capnp::node_service;
use tokio::sync::mpsc;
use tokio::time::interval;
@@ -60,6 +60,8 @@ pub(crate) enum SlashCommand {
Rename { name: String },
History { count: usize },
/// Mesh subcommands: /mesh peers, /mesh server <addr>, etc.
MeshStart,
MeshStop,
MeshPeers,
MeshServer { addr: String },
MeshSend { peer_id: String, message: String },
@@ -68,6 +70,8 @@ pub(crate) enum SlashCommand {
MeshRoute,
MeshIdentity,
MeshStore,
MeshTrace { address: String },
MeshStats,
/// Display safety number for out-of-band key verification with a contact.
Verify { username: String },
/// Rotate own MLS leaf key in the active group.
@@ -173,6 +177,8 @@ pub(crate) fn parse_input(line: &str) -> Input {
Input::Slash(SlashCommand::History { count })
}
"/mesh" => match arg.as_deref() {
Some("start") => Input::Slash(SlashCommand::MeshStart),
Some("stop") => Input::Slash(SlashCommand::MeshStop),
Some("peers") => Input::Slash(SlashCommand::MeshPeers),
Some(rest) if rest.starts_with("server ") => {
let addr = rest.trim_start_matches("server ").trim().to_string();
@@ -216,12 +222,22 @@ pub(crate) fn parse_input(line: &str) -> Input {
Input::Slash(SlashCommand::MeshSubscribe { topic: topic.into() })
}
}
Some("route") => Input::Slash(SlashCommand::MeshRoute),
Some("route") | Some("routes") => Input::Slash(SlashCommand::MeshRoute),
Some("identity") | Some("id") => Input::Slash(SlashCommand::MeshIdentity),
Some("store") => Input::Slash(SlashCommand::MeshStore),
Some("stats") => Input::Slash(SlashCommand::MeshStats),
Some(rest) if rest.starts_with("trace ") => {
let address = rest[6..].trim();
if address.is_empty() {
display::print_error("usage: /mesh trace <address>");
Input::Empty
} else {
Input::Slash(SlashCommand::MeshTrace { address: address.into() })
}
}
_ => {
display::print_error(
"usage: /mesh peers|server|send|broadcast|subscribe|route|identity|store"
"usage: /mesh start|stop|peers|server|send|broadcast|subscribe|route|identity|store|trace|stats"
);
Input::Empty
}
@@ -355,10 +371,10 @@ fn derive_key_path(cert_path: &Path) -> PathBuf {
cert_path.with_file_name(key_name)
}
/// Find the `qpq-server` binary: same directory as current exe, then PATH.
/// Find the `qpc-server` binary: same directory as current exe, then PATH.
fn find_server_binary() -> Option<PathBuf> {
if let Ok(exe) = std::env::current_exe() {
let sibling = exe.with_file_name("qpq-server");
let sibling = exe.with_file_name("qpc-server");
if sibling.exists() {
return Some(sibling);
}
@@ -366,7 +382,7 @@ fn find_server_binary() -> Option<PathBuf> {
// Fall back to PATH lookup.
std::env::var_os("PATH").and_then(|paths| {
std::env::split_paths(&paths)
.map(|dir| dir.join("qpq-server"))
.map(|dir| dir.join("qpc-server"))
.find(|p| p.exists())
})
}
@@ -400,13 +416,13 @@ async fn ensure_server(
if ca_cert.exists() {
// Cert exists but connection failed and no binary found.
anyhow::bail!(
"server at {server} is not reachable and qpq-server binary not found; \
start a server manually or install qpq-server"
"server at {server} is not reachable and qpc-server binary not found; \
start a server manually or install qpc-server"
);
} else {
anyhow::bail!(
"no server running and qpq-server binary not found; \
start a server manually or install qpq-server"
"no server running and qpc-server binary not found; \
start a server manually or install qpc-server"
);
}
}
@@ -445,7 +461,7 @@ async fn ensure_server(
if start.elapsed() > max_wait {
anyhow::bail!(
"auto-started qpq-server but it did not become ready within {max_wait:?}"
"auto-started qpc-server but it did not become ready within {max_wait:?}"
);
}
@@ -804,6 +820,8 @@ async fn handle_slash(
SlashCommand::GroupInfo => cmd_group_info(session, client).await,
SlashCommand::Rename { name } => cmd_rename(session, &name),
SlashCommand::History { count } => cmd_history(session, count),
SlashCommand::MeshStart => cmd_mesh_start(session).await,
SlashCommand::MeshStop => cmd_mesh_stop(session).await,
SlashCommand::MeshPeers => cmd_mesh_peers(),
SlashCommand::MeshServer { addr } => {
display::print_status(&format!(
@@ -811,12 +829,14 @@ async fn handle_slash(
));
Ok(())
}
SlashCommand::MeshSend { peer_id, message } => cmd_mesh_send(&peer_id, &message),
SlashCommand::MeshBroadcast { topic, message } => cmd_mesh_broadcast(&topic, &message),
SlashCommand::MeshSubscribe { topic } => cmd_mesh_subscribe(&topic),
SlashCommand::MeshSend { peer_id, message } => cmd_mesh_send(session, &peer_id, &message).await,
SlashCommand::MeshBroadcast { topic, message } => cmd_mesh_broadcast(session, &topic, &message).await,
SlashCommand::MeshSubscribe { topic } => cmd_mesh_subscribe(session, &topic),
SlashCommand::MeshRoute => cmd_mesh_route(session),
SlashCommand::MeshIdentity => cmd_mesh_identity(session),
SlashCommand::MeshStore => cmd_mesh_store(session),
SlashCommand::MeshTrace { address } => cmd_mesh_trace(session, &address),
SlashCommand::MeshStats => cmd_mesh_stats(session),
SlashCommand::Verify { username } => cmd_verify(session, client, &username).await,
SlashCommand::UpdateKey => cmd_update_key(session, client).await,
SlashCommand::Typing => cmd_typing(session, client).await,
@@ -862,7 +882,9 @@ pub(crate) fn print_help() {
display::print_status(" /rename <name> - Rename the current conversation");
display::print_status(" /history [N] - Show last N messages (default: 20)");
display::print_status(" /whoami - Show your identity");
display::print_status(" /mesh peers - Discover nearby qpq nodes via mDNS");
display::print_status(" /mesh start - Start the P2P node for direct messaging");
display::print_status(" /mesh stop - Stop the P2P node");
display::print_status(" /mesh peers - Discover nearby qpc nodes via mDNS");
display::print_status(" /mesh server <host:port> - Show how to reconnect to a mesh node");
display::print_status(" /mesh send <peer> <msg> - Send a P2P message to a mesh peer");
display::print_status(" /mesh broadcast <topic> <m> - Broadcast an encrypted message on a topic");
@@ -870,6 +892,8 @@ pub(crate) fn print_help() {
display::print_status(" /mesh route - Show known mesh peers and routes");
display::print_status(" /mesh identity - Show mesh node identity info");
display::print_status(" /mesh store - Show mesh store-and-forward stats");
display::print_status(" /mesh trace <address> - Show route to a mesh address");
display::print_status(" /mesh stats - Show delivery statistics per destination");
display::print_status(" /update-key - Rotate your MLS leaf key in the active group");
display::print_status(" /verify <username> - Show safety number for key verification");
display::print_status(" /react <emoji> [index] - React to last message (or message at index)");
@@ -1099,7 +1123,7 @@ pub(crate) async fn cmd_rotate_all_keys(
cmd_update_key(session, client).await?;
// Step 2: Generate new hybrid KEM keypair and upload.
let new_kp = quicproquo_core::HybridKeypair::generate();
let new_kp = quicprochat_core::HybridKeypair::generate();
let id_key = session.identity.public_key_bytes();
upload_hybrid_key(client, &id_key, &new_kp.public_key()).await?;
session.hybrid_kp = Some(new_kp);
@@ -1108,7 +1132,95 @@ pub(crate) async fn cmd_rotate_all_keys(
Ok(())
}
/// Discover nearby qpq servers via mDNS (requires `--features mesh` build).
/// Start the P2P node for mesh messaging.
pub(crate) async fn cmd_mesh_start(session: &mut SessionState) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
if session.p2p_node.is_some() {
display::print_status("P2P node is already running");
return Ok(());
}
display::print_status("starting P2P node...");
// Try to load a persisted mesh identity or generate a new one.
let mesh_state_path = session.state_path.with_extension("mesh.json");
let mesh_id = if mesh_state_path.exists() {
match quicprochat_p2p::identity::MeshIdentity::load(&mesh_state_path) {
Ok(id) => {
display::print_status("loaded existing mesh identity");
Some(id)
}
Err(e) => {
display::print_status(&format!("could not load mesh identity: {e}, generating new"));
None
}
}
} else {
None
};
let node = if let Some(id) = mesh_id {
match quicprochat_p2p::P2pNode::start_with_mesh(None, id, 1000).await {
Ok(n) => n,
Err(e) => {
display::print_error(&format!("failed to start P2P node: {e}"));
return Ok(());
}
}
} else {
match quicprochat_p2p::P2pNode::start(None).await {
Ok(n) => n,
Err(e) => {
display::print_error(&format!("failed to start P2P node: {e}"));
return Ok(());
}
}
};
let node_id = node.node_id();
session.p2p_node = Some(Arc::new(node));
display::print_status(&format!("P2P node started: {}", node_id.fmt_short()));
}
#[cfg(not(feature = "mesh"))]
{
let _ = session;
display::print_error("requires --features mesh");
}
Ok(())
}
/// Stop the P2P node.
pub(crate) async fn cmd_mesh_stop(session: &mut SessionState) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
match session.p2p_node.take() {
Some(node) => {
// Try to unwrap the Arc; if there are other references, just drop our handle.
match Arc::try_unwrap(node) {
Ok(owned) => {
owned.close().await;
display::print_status("P2P node stopped");
}
Err(_arc) => {
display::print_status("P2P node reference released (other tasks may still hold it)");
}
}
}
None => {
display::print_status("P2P node is not running");
}
}
}
#[cfg(not(feature = "mesh"))]
{
let _ = session;
display::print_error("requires --features mesh");
}
Ok(())
}
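The stop path above leans on the `Arc::try_unwrap` contract: the inner node is recovered (and `close()` can run) only when the session holds the last strong reference. A minimal illustration:

```rust
use std::sync::Arc;

// Arc::try_unwrap yields the inner value only when the strong count is
// exactly one; otherwise it hands the Arc back. This is why /mesh stop
// can only close() the node when no background task still holds it.
fn take_exclusive<T>(shared: Arc<T>) -> Result<T, Arc<T>> {
    Arc::try_unwrap(shared)
}
```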
/// Discover nearby qpc servers via mDNS (requires `--features mesh` build).
pub(crate) fn cmd_mesh_peers() -> anyhow::Result<()> {
use super::mesh_discovery::MeshDiscovery;
@@ -1118,65 +1230,117 @@ pub(crate) fn cmd_mesh_peers() -> anyhow::Result<()> {
return Ok(());
}
Ok(disc) => {
display::print_status("scanning for nearby qpq nodes (2s)...");
display::print_status("scanning for nearby qpc nodes (2s)...");
// Block briefly to collect mDNS announcements from the local network.
std::thread::sleep(std::time::Duration::from_secs(2));
let peers = disc.peers();
if peers.is_empty() {
display::print_status("no qpq nodes found on the local network");
display::print_status("no qpc nodes found on the local network");
} else {
display::print_status(&format!("found {} node(s):", peers.len()));
for p in &peers {
display::print_status(&format!(" {} at {}", p.domain, p.server_addr));
}
display::print_status("use: /mesh server <host:port> to note the address,");
display::print_status("then reconnect with: qpq --server <host:port>");
display::print_status("then reconnect with: qpc --server <host:port>");
}
}
}
Ok(())
}
/// Send a direct P2P mesh message (stub — P2pNode not yet wired into session).
pub(crate) fn cmd_mesh_send(peer_id: &str, message: &str) -> anyhow::Result<()> {
/// Send a direct P2P mesh message via the session's P2P node.
pub(crate) async fn cmd_mesh_send(session: &SessionState, peer_id: &str, message: &str) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
display::print_status(&format!("mesh send: would send to {peer_id}: {message}"));
display::print_status("(P2P node integration pending — message not actually sent)");
match &session.p2p_node {
Some(node) => {
// Parse the peer_id as an iroh PublicKey hex string and create an EndpointAddr.
let pk_bytes = hex::decode(peer_id)
.map_err(|e| anyhow::anyhow!("invalid peer_id hex: {e}"))?;
let pk_array: [u8; 32] = pk_bytes
.as_slice()
.try_into()
.map_err(|_| anyhow::anyhow!("peer_id must be 32 bytes (64 hex chars)"))?;
let pk = iroh::PublicKey::from_bytes(&pk_array);
let addr = iroh::EndpointAddr::from(pk);
match node.send(addr, message.as_bytes()).await {
Ok(()) => {
display::print_status(&format!("sent to {}: {message}", &peer_id[..8.min(peer_id.len())]));
}
Err(e) => {
display::print_error(&format!("P2P send failed: {e}"));
}
}
}
None => {
display::print_error("P2P node not started. Use /mesh start to initialize.");
}
}
}
#[cfg(not(feature = "mesh"))]
{
let _ = (peer_id, message);
let _ = (session, peer_id, message);
display::print_error("requires --features mesh");
}
Ok(())
}
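The send path above treats a peer id as the node's 32-byte public key written as 64 hex characters. A std-only sketch of that validation (the real code uses the `hex` crate plus `iroh::PublicKey::from_bytes`; `decode_peer_id` is a stand-in):

```rust
// Std-only sketch: decode a 64-hex-char mesh peer id into 32 key bytes,
// rejecting wrong lengths and non-hex input, matching the checks above.
fn decode_peer_id(peer_id: &str) -> Result<[u8; 32], String> {
    if peer_id.len() != 64 || !peer_id.is_ascii() {
        return Err("peer_id must be 32 bytes (64 hex chars)".into());
    }
    let mut out = [0u8; 32];
    for (i, byte) in out.iter_mut().enumerate() {
        let pair = &peer_id[2 * i..2 * i + 2];
        *byte = u8::from_str_radix(pair, 16)
            .map_err(|e| format!("invalid peer_id hex: {e}"))?;
    }
    Ok(out)
}
```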
/// Broadcast an encrypted message on a topic (stub — P2pNode not yet wired into session).
pub(crate) fn cmd_mesh_broadcast(topic: &str, message: &str) -> anyhow::Result<()> {
/// Broadcast an encrypted message on a topic via the session's P2P node.
pub(crate) async fn cmd_mesh_broadcast(session: &SessionState, topic: &str, message: &str) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
display::print_status(&format!("mesh broadcast to {topic}: {message}"));
display::print_status("(P2P node integration pending — message not actually sent)");
match &session.p2p_node {
Some(node) => {
match node.broadcast(topic, message.as_bytes()).await {
Ok(()) => {
display::print_status(&format!("broadcast to {topic}: {message}"));
}
Err(e) => {
display::print_error(&format!("broadcast failed: {e}"));
}
}
}
None => {
display::print_error("P2P node not started. Use /mesh start to initialize.");
}
}
}
#[cfg(not(feature = "mesh"))]
{
let _ = (topic, message);
let _ = (session, topic, message);
display::print_error("requires --features mesh");
}
Ok(())
}
/// Subscribe to a broadcast topic (stub — P2pNode not yet wired into session).
pub(crate) fn cmd_mesh_subscribe(topic: &str) -> anyhow::Result<()> {
/// Subscribe to a broadcast topic on the session's P2P node.
pub(crate) fn cmd_mesh_subscribe(session: &SessionState, topic: &str) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
match &session.p2p_node {
Some(node) => {
// Generate a random key for the subscription.
let key: [u8; 32] = rand::random();
match node.subscribe(topic, key) {
Ok(()) => {
display::print_status(&format!("subscribed to topic: {topic}"));
display::print_status("(P2P node integration pending — subscription is not persisted)");
display::print_status(&format!("share this key to let others join: {}", hex::encode(key)));
}
Err(e) => {
display::print_error(&format!("subscribe failed: {e}"));
}
}
}
None => {
display::print_error("P2P node not started. Use /mesh start to initialize.");
}
}
}
#[cfg(not(feature = "mesh"))]
{
let _ = topic;
let _ = (session, topic);
display::print_error("requires --features mesh");
}
Ok(())
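The subscribe path above generates a random 32-byte symmetric topic key and prints it as hex so other members can join. A std-only sketch of that hex encoding (the real code uses the `hex` and `rand` crates):

```rust
// Encode a 32-byte topic key as lowercase hex for out-of-band sharing.
fn encode_key_hex(key: &[u8; 32]) -> String {
    key.iter().map(|b| format!("{b:02x}")).collect()
}
```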
@@ -1188,7 +1352,7 @@ pub(crate) fn cmd_mesh_route(session: &SessionState) -> anyhow::Result<()> {
{
let mesh_state_path = session.state_path.with_extension("mesh.json");
if mesh_state_path.exists() {
let id = quicproquo_p2p::identity::MeshIdentity::load(&mesh_state_path)?;
let id = quicprochat_p2p::identity::MeshIdentity::load(&mesh_state_path)?;
let peers = id.known_peers();
if peers.is_empty() {
display::print_status("no known mesh peers");
@@ -1222,7 +1386,7 @@ pub(crate) fn cmd_mesh_identity(session: &SessionState) -> anyhow::Result<()> {
{
let mesh_state_path = session.state_path.with_extension("mesh.json");
if mesh_state_path.exists() {
let id = quicproquo_p2p::identity::MeshIdentity::load(&mesh_state_path)?;
let id = quicprochat_p2p::identity::MeshIdentity::load(&mesh_state_path)?;
display::print_status(&format!("mesh public key: {}", hex::encode(id.public_key())));
display::print_status(&format!("known peers: {}", id.known_peers().len()));
} else {
@@ -1242,10 +1406,74 @@ pub(crate) fn cmd_mesh_identity(session: &SessionState) -> anyhow::Result<()> {
pub(crate) fn cmd_mesh_store(session: &SessionState) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
// Without a live P2pNode in the session, we can only report that the store
// is not active. Once P2pNode is wired in, this will show real stats.
display::print_status("mesh store: not active (P2P node not started in this session)");
display::print_status("start mesh mode to enable store-and-forward");
match &session.p2p_node {
Some(node) => {
let store = node.mesh_store();
let guard = store.lock().map_err(|e| anyhow::anyhow!("store lock: {e}"))?;
let (total_messages, unique_recipients) = guard.stats();
display::print_status(&format!("mesh store: {} messages for {} recipients", total_messages, unique_recipients));
}
None => {
display::print_status("mesh store: not active (P2P node not started)");
display::print_status("use /mesh start to enable store-and-forward");
}
}
}
#[cfg(not(feature = "mesh"))]
{
let _ = session;
display::print_error("requires --features mesh");
}
Ok(())
}
/// Show route to a mesh address.
pub(crate) fn cmd_mesh_trace(session: &SessionState, address: &str) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
// Parse the address (hex string to 16 bytes)
let addr_bytes = match hex::decode(address) {
Ok(b) if b.len() == 16 => {
let mut arr = [0u8; 16];
arr.copy_from_slice(&b);
arr
}
Ok(b) if b.len() == 32 => {
// Full public key — compute truncated address
quicprochat_p2p::announce::compute_address(&b)
}
_ => {
display::print_error("invalid address: expected 16-byte hex (32 chars) or 32-byte key (64 chars)");
return Ok(());
}
};
display::print_status(&format!("tracing route to {}", hex::encode(addr_bytes)));
// For now there is no routing table wired into the session to query.
// In a full implementation, this would query the MeshRouter.
display::print_status(" (routing table not yet wired to REPL session)");
display::print_status(" this will show hop-by-hop path once MeshRouter is integrated");
let _ = session;
}
#[cfg(not(feature = "mesh"))]
{
let _ = (session, address);
display::print_error("requires --features mesh");
}
Ok(())
}
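`cmd_mesh_trace` accepts either a 16-byte mesh address or a full 32-byte public key. A std-only sketch of that normalization, with one loud assumption: how a key maps to an address is modeled here as taking the first 16 bytes, whereas the real `quicprochat_p2p::announce::compute_address` may derive it differently (e.g. by hashing):

```rust
// Accept a 16-byte address as-is, or reduce a 32-byte key to an address.
// The truncation here is an illustrative assumption, not the real scheme.
fn normalize_mesh_address(bytes: &[u8]) -> Option<[u8; 16]> {
    match bytes.len() {
        16 | 32 => {
            let mut addr = [0u8; 16];
            addr.copy_from_slice(&bytes[..16]);
            Some(addr)
        }
        _ => None,
    }
}
```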
/// Show delivery statistics per destination.
pub(crate) fn cmd_mesh_stats(session: &SessionState) -> anyhow::Result<()> {
#[cfg(feature = "mesh")]
{
// For now, report that stats are not available without MeshRouter
display::print_status("mesh delivery statistics:");
display::print_status(" (MeshRouter not yet wired to REPL session)");
display::print_status(" stats will show per-destination delivery counts once integrated");
let _ = session;
}
#[cfg(not(feature = "mesh"))]
@@ -1449,10 +1677,8 @@ pub(crate) async fn cmd_dm(
},
display_name: format!("@{username}"),
mls_group_blob: member
.group_ref()
.map(bincode::serialize)
.transpose()
.context("serialize group")?,
.serialize_mls_state()
.context("serialize MLS state")?,
keystore_blob: None,
member_keys,
unread_count: 0,
@@ -1493,10 +1719,8 @@ pub(crate) fn cmd_create_group(session: &mut SessionState, name: &str) -> anyhow
kind: ConversationKind::Group { name: name.to_string() },
display_name: format!("#{name}"),
mls_group_blob: member
.group_ref()
.map(bincode::serialize)
.transpose()
.context("serialize group")?,
.serialize_mls_state()
.context("serialize MLS state")?,
keystore_blob: None,
member_keys,
unread_count: 0,
@@ -1780,9 +2004,7 @@ pub(crate) async fn cmd_join(
kind: ConversationKind::Group { name: display.clone() },
display_name: format!("#{display}"),
mls_group_blob: new_member
.group_ref()
.map(bincode::serialize)
.transpose()
.serialize_mls_state()
.context("serialize joined group")?,
keystore_blob: None,
member_keys,
@@ -2005,8 +2227,8 @@ pub(crate) async fn cmd_typing(
);
let app_payload = serialize_typing(1);
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
@@ -2082,8 +2304,8 @@ pub(crate) async fn cmd_react(
let app_payload = serialize_reaction(ref_msg_id, emoji.as_bytes())
.context("serialize reaction")?;
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
@@ -2167,8 +2389,8 @@ pub(crate) async fn cmd_edit(
let app_payload = serialize_edit(&msg_id, new_text.as_bytes())
.context("serialize edit message")?;
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
@@ -2238,8 +2460,8 @@ pub(crate) async fn cmd_delete(
);
let app_payload = serialize_delete(&msg_id);
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
@@ -2394,8 +2616,8 @@ pub(crate) async fn cmd_send_file(
"cannot send files in a local-only conversation"
);
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
@@ -2672,8 +2894,8 @@ pub(crate) async fn do_send(
.context("serialize app message")?;
// Metadata protection: seal sender identity inside payload + pad to bucket size.
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member
.send_message(&padded)
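The seal-then-pad pipeline above pads each sealed payload "to bucket size" so ciphertext length leaks only a coarse size class. A std-only sketch of bucket selection, with illustrative bucket sizes; the real `quicprochat_core::padding` module defines its own scheme:

```rust
// Illustrative bucket ladder; the real padding module's sizes may differ.
const BUCKETS: [usize; 4] = [256, 1024, 4096, 16384];

// Return the padded length for a payload: the smallest bucket that fits.
// In this sketch, oversized payloads keep their exact length.
fn bucket_for(len: usize) -> usize {
    BUCKETS.iter().copied().find(|&b| b >= len).unwrap_or(len)
}
```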
@@ -2762,8 +2984,8 @@ async fn send_dummy_message(
}
let dummy_payload = serialize_dummy();
let sealed = quicproquo_core::sealed_sender::seal(&identity, &dummy_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &dummy_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = match member.send_message(&padded) {
Ok(ct) => ct,
@@ -2845,12 +3067,12 @@ async fn poll_messages(
// Falls back gracefully for messages from older clients.
let (sender_key, app_bytes) = {
// Step 1: try unpad
let after_unpad = quicproquo_core::padding::unpad(&plaintext)
let after_unpad = quicprochat_core::padding::unpad(&plaintext)
.unwrap_or_else(|_| plaintext.clone());
// Step 2: try unseal
if quicproquo_core::sealed_sender::is_sealed(&after_unpad) {
match quicproquo_core::sealed_sender::unseal(&after_unpad) {
if quicprochat_core::sealed_sender::is_sealed(&after_unpad) {
match quicprochat_core::sealed_sender::unseal(&after_unpad) {
Ok((sk, inner)) => (sk.to_vec(), inner),
Err(_) => (my_key.clone(), after_unpad),
}
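The decode steps above are deliberately forgiving: each transform (unpad, unseal) is attempted, and on failure the input passes through unchanged so messages from older clients still parse. A std-only sketch of that fallback pattern, using a hypothetical one-byte marker in place of the real sealed-sender framing:

```rust
// Hypothetical marker byte standing in for the real sealed-sender framing.
const SEALED_MARKER: u8 = 0x53;

fn is_sealed(data: &[u8]) -> bool {
    data.first() == Some(&SEALED_MARKER)
}

// Attempt the "unseal" transform; fall back to the raw bytes for messages
// from older clients that never sealed their sender identity.
fn decode_with_fallback(data: &[u8]) -> Vec<u8> {
    if is_sealed(data) {
        data[1..].to_vec()
    } else {
        data.to_vec()
    }
}
```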
@@ -3048,8 +3270,8 @@ async fn poll_messages(
if let Some(mid) = msg_id {
let receipt_bytes = serialize_read_receipt(mid);
let identity = Arc::clone(&session.identity);
let sealed = quicproquo_core::sealed_sender::seal(&identity, &receipt_bytes);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &receipt_bytes);
let padded = quicprochat_core::padding::pad(&sealed);
if let Some(m) = session.members.get_mut(conv_id) {
if let Ok(ct) = m.send_message(&padded) {
let _ = enqueue(client, &sender_key, &ct).await;
@@ -3186,8 +3408,9 @@ async fn try_auto_join(
};
let mls_blob = member
.group_ref()
.and_then(|g| bincode::serialize(g).ok());
.serialize_mls_state()
.ok()
.flatten();
let conv = Conversation {
id: conv_id.clone(),


@@ -10,8 +10,8 @@ use rustls::{ClientConfig as RustlsClientConfig, RootCertStore};
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
use capnp_rpc::{rpc_twoparty_capnp::Side, twoparty, RpcSystem};
use quicproquo_core::HybridPublicKey;
use quicproquo_proto::node_capnp::{auth, node_service};
use quicprochat_core::HybridPublicKey;
use quicprochat_proto::node_capnp::{auth, node_service};
use crate::{AUTH_CONTEXT, INSECURE_SKIP_VERIFY};
@@ -440,11 +440,11 @@ pub async fn fetch_hybrid_key(
/// Decrypt a hybrid envelope. Requires a hybrid key; no fallback to plaintext MLS.
pub fn try_hybrid_decrypt(
hybrid_kp: Option<&quicproquo_core::HybridKeypair>,
hybrid_kp: Option<&quicprochat_core::HybridKeypair>,
payload: &[u8],
) -> anyhow::Result<Vec<u8>> {
let kp = hybrid_kp.ok_or_else(|| anyhow::anyhow!("hybrid key required for decryption"))?;
quicproquo_core::hybrid_decrypt(kp, payload, b"", b"").map_err(|e| anyhow::anyhow!("{e}"))
quicprochat_core::hybrid_decrypt(kp, payload, b"", b"").map_err(|e| anyhow::anyhow!("{e}"))
}
/// Peek at queued payloads without removing them.
@@ -701,9 +701,9 @@ pub async fn resolve_user(
.to_vec();
if !proof_bytes.is_empty() {
let proof = quicproquo_kt::InclusionProof::from_bytes(&proof_bytes)
let proof = quicprochat_kt::InclusionProof::from_bytes(&proof_bytes)
.context("resolve_user: inclusion proof deserialise failed")?;
quicproquo_kt::verify_inclusion(&proof, username, &key)
quicprochat_kt::verify_inclusion(&proof, username, &key)
.context("resolve_user: KT inclusion proof verification FAILED — possible key mislabelling")?;
}


@@ -11,12 +11,12 @@ use std::time::Instant;
use anyhow::Context;
use zeroize::Zeroizing;
use quicproquo_core::{DiskKeyStore, GroupMember, HybridKeypair, IdentityKeypair};
use quicprochat_core::{DiskKeyStore, GroupMember, HybridKeypair, IdentityKeypair};
use super::conversation::{
now_ms, Conversation, ConversationId, ConversationKind, ConversationStore,
};
use super::state::{load_or_init_state, keystore_path};
use super::state::load_or_init_state;
/// Runtime state for an interactive REPL session.
pub struct SessionState {
@@ -53,6 +53,9 @@ pub struct SessionState {
pub padding_enabled: bool,
/// Last epoch at which we sent a message (for /verify-fs).
pub last_send_epoch: Option<u64>,
/// P2P node for direct mesh messaging (requires `--features mesh`).
#[cfg(feature = "mesh")]
pub p2p_node: Option<Arc<quicprochat_p2p::P2pNode>>,
}
impl SessionState {
@@ -93,6 +96,8 @@ impl SessionState {
auto_clear_secs: None,
padding_enabled: false,
last_send_epoch: None,
#[cfg(feature = "mesh")]
p2p_node: None,
};
// Migrate legacy single-group into conversations if present and not yet migrated.
@@ -109,7 +114,7 @@ impl SessionState {
/// Migrate the legacy single-group from StoredState into the conversation DB.
fn migrate_legacy_group(
&mut self,
state_path: &Path,
_state_path: &Path,
group_blob: &Option<Vec<u8>>,
) -> anyhow::Result<()> {
let blob = match group_blob {
@@ -117,16 +122,22 @@ impl SessionState {
None => return Ok(()),
};
// Reconstruct GroupMember using the legacy keystore and group blob.
let ks_path = keystore_path(state_path);
let ks = DiskKeyStore::persistent(&ks_path)?;
let group = bincode::deserialize(blob).context("decode legacy group")?;
let member = GroupMember::new_with_state(
// Legacy group blobs used openmls 0.5 serde format. After the 0.8
// upgrade the blob format changed to storage-provider state. Attempt
// to load from the new format; if that fails, skip the legacy group.
let group_id_guess = &blob[..blob.len().min(16)];
let member = match GroupMember::new_from_storage_bytes(
Arc::clone(&self.identity),
ks,
Some(group),
blob,
group_id_guess,
false, // legacy groups are classical
);
) {
Ok(m) => m,
Err(e) => {
tracing::warn!(error = %e, "skipping incompatible legacy group blob (openmls version mismatch)");
return Ok(());
}
};
let group_id_bytes = member.group_id().unwrap_or_default();
@@ -182,27 +193,32 @@ impl SessionState {
/// Create a GroupMember from a stored conversation.
fn create_member_from_conv(&self, conv: &Conversation) -> anyhow::Result<GroupMember> {
if let Some(blob) = conv.mls_group_blob.as_ref() {
let group_id = conv.id.0.as_slice();
let member = GroupMember::new_from_storage_bytes(
Arc::clone(&self.identity),
blob,
group_id,
conv.is_hybrid,
)
.context("restore MLS state from conversation db")?;
Ok(member)
} else {
// No MLS state — create an empty member.
let ks_path = self.keystore_path_for(&conv.id);
let ks = DiskKeyStore::persistent(&ks_path)
.unwrap_or_else(|e| {
tracing::warn!(path = %ks_path.display(), error = %e, "DiskKeyStore open failed, falling back to ephemeral");
DiskKeyStore::ephemeral()
});
let group = conv
.mls_group_blob
.as_ref()
.map(|b| bincode::deserialize(b))
.transpose()
.context("decode MLS group from conversation db")?;
Ok(GroupMember::new_with_state(
Arc::clone(&self.identity),
ks,
group,
None,
conv.is_hybrid,
))
}
}
/// Path for a per-conversation keystore file.
fn keystore_path_for(&self, conv_id: &ConversationId) -> PathBuf {
@@ -214,10 +230,8 @@ impl SessionState {
pub fn save_member(&self, conv_id: &ConversationId) -> anyhow::Result<()> {
let member = self.members.get(conv_id).context("no such conversation")?;
let blob = member
.group_ref()
.map(bincode::serialize)
.transpose()
.context("serialize MLS group")?;
.serialize_mls_state()
.context("serialize MLS state")?;
let member_keys = member.member_identities();


@@ -10,7 +10,7 @@ use chacha20poly1305::{
use rand::RngCore;
use serde::{Deserialize, Serialize};
use quicproquo_core::{DiskKeyStore, GroupMember, HybridKeypair, HybridKeypairBytes, IdentityKeypair};
use quicprochat_core::{DiskKeyStore, GroupMember, HybridKeypair, HybridKeypairBytes, IdentityKeypair};
/// Magic bytes for encrypted client state files.
const STATE_MAGIC: &[u8; 4] = b"QPCE";
@@ -27,18 +27,31 @@ pub struct StoredState {
/// Cached member public keys for group participants.
#[serde(default)]
pub member_keys: Vec<Vec<u8>>,
/// MLS group ID bytes, needed to reload the group from StorageProvider state.
#[serde(default)]
pub group_id: Option<Vec<u8>>,
}
impl StoredState {
pub fn into_parts(self, state_path: &Path) -> anyhow::Result<(GroupMember, Option<HybridKeypair>)> {
let identity = Arc::new(IdentityKeypair::from_seed(self.identity_seed));
let group = self
.group
.map(|bytes| bincode::deserialize(&bytes).context("decode group"))
.transpose()?;
let key_store = DiskKeyStore::persistent(keystore_path(state_path))?;
let hybrid = self.hybrid_key.is_some();
let member = GroupMember::new_with_state(identity, key_store, group, hybrid);
let member = match (self.group.as_ref(), self.group_id.as_ref()) {
(Some(storage_bytes), Some(gid)) => {
GroupMember::new_from_storage_bytes(
identity,
storage_bytes,
gid,
hybrid,
)
.context("restore MLS state from stored state")?
}
_ => {
let key_store = DiskKeyStore::persistent(keystore_path(state_path))?;
GroupMember::new_with_state(identity, key_store, None, hybrid)
}
};
let hybrid_kp = self
.hybrid_key
@@ -50,15 +63,15 @@ impl StoredState {
pub fn from_parts(member: &GroupMember, hybrid_kp: Option<&HybridKeypair>) -> anyhow::Result<Self> {
let group = member
.group_ref()
.map(|g| bincode::serialize(g).context("serialize group"))
.transpose()?;
.serialize_mls_state()
.context("serialize MLS state")?;
Ok(Self {
identity_seed: *member.identity_seed(),
group,
hybrid_key: hybrid_kp.map(|kp| kp.to_bytes()),
member_keys: Vec::new(),
group_id: member.group_id(),
})
}
}
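The `into_parts` change above picks a restore path based on what was persisted: the MLS group can be reloaded only when both the storage-provider bytes and the group id survived serialization; otherwise the session starts with a fresh, group-less member. The decision can be sketched as:

```rust
#[derive(Debug, PartialEq)]
enum RestorePath {
    FromStorage,
    FreshMember,
}

// Mirror of the (group, group_id) match in into_parts: both pieces are
// required to rebuild MLS state from storage-provider bytes.
fn choose_restore_path(group: Option<&[u8]>, group_id: Option<&[u8]>) -> RestorePath {
    match (group, group_id) {
        (Some(_), Some(_)) => RestorePath::FromStorage,
        _ => RestorePath::FreshMember,
    }
}
```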
@@ -245,6 +258,7 @@ mod tests {
hybrid_key: None,
group: None,
member_keys: Vec::new(),
group_id: None,
};
let password = "test-password";
let plaintext = bincode::serialize(&state).unwrap();
@@ -268,6 +282,7 @@ mod tests {
}),
group: None,
member_keys: Vec::new(),
group_id: None,
};
let password = "another-password";
let plaintext = bincode::serialize(&state).unwrap();
@@ -285,6 +300,7 @@ mod tests {
hybrid_key: None,
group: None,
member_keys: Vec::new(),
group_id: None,
};
let plaintext = bincode::serialize(&state).unwrap();
let encrypted = encrypt_state("correct", &plaintext).unwrap();


@@ -1,4 +1,4 @@
//! Full-screen Ratatui TUI for quicproquo.
//! Full-screen Ratatui TUI for quicprochat.
//!
//! Layout:
//! ┌──────────────┬──────────────────────────────────────────┐
@@ -48,11 +48,11 @@ use super::session::SessionState;
use super::state::load_or_init_state;
use super::token_cache::{load_cached_session, save_cached_session};
use quicproquo_core::{
use quicprochat_core::{
AppMessage, DiskKeyStore, GroupMember, IdentityKeypair, ReceivedMessage,
hybrid_encrypt, parse as parse_app_msg, serialize_chat,
};
use quicproquo_proto::node_capnp::node_service;
use quicprochat_proto::node_capnp::node_service;
// ── App events ───────────────────────────────────────────────────────────────
@@ -83,6 +83,8 @@ struct App {
channel_names: Vec<String>,
/// Conversation IDs, parallel to `channel_names`.
channel_ids: Vec<ConversationId>,
/// Unread message counts, parallel to `channel_names`.
unread_counts: Vec<u32>,
/// Index of the selected channel in the sidebar.
selected_channel: usize,
/// Messages for the currently active channel.
@@ -102,10 +104,12 @@ impl App {
let convs = session.conv_store.list_conversations()?;
let channel_names: Vec<String> = convs.iter().map(|c| c.display_name.clone()).collect();
let channel_ids: Vec<ConversationId> = convs.iter().map(|c| c.id.clone()).collect();
let unread_counts: Vec<u32> = convs.iter().map(|c| c.unread_count).collect();
Ok(Self {
channel_names,
channel_ids,
unread_counts,
selected_channel: 0,
messages: Vec::new(),
input: String::new(),
@@ -232,14 +236,27 @@ fn draw_sidebar(frame: &mut Frame, app: &App, area: Rect) {
.iter()
.enumerate()
.map(|(i, name)| {
let style = if i == app.selected_channel {
let unread = app.unread_counts.get(i).copied().unwrap_or(0);
let is_selected = i == app.selected_channel;
let label = if unread > 0 && !is_selected {
format!("{name} ({unread})")
} else {
name.clone()
};
let style = if is_selected {
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD | Modifier::REVERSED)
} else if unread > 0 {
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Cyan)
};
ListItem::new(Line::from(Span::styled(name.clone(), style)))
ListItem::new(Line::from(Span::styled(label, style)))
})
.collect();
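The unread badge above decorates only unselected channels; the selected channel stays clean because its reversed highlight already draws the eye. That labeling rule can be sketched in isolation (function name hypothetical, not part of the diff):

```rust
/// Build a sidebar label: append "(n)" only for unselected channels with unread messages.
fn channel_label(name: &str, unread: u32, is_selected: bool) -> String {
    if unread > 0 && !is_selected {
        format!("{name} ({unread})")
    } else {
        name.to_string()
    }
}

fn main() {
    assert_eq!(channel_label("#general", 3, false), "#general (3)");
    // The selected channel drops the badge even when unread > 0.
    assert_eq!(channel_label("#general", 3, true), "#general");
    assert_eq!(channel_label("#random", 0, false), "#random");
}
```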
@@ -393,11 +410,11 @@ async fn poll_task(
match member.receive_message(&mls_payload) {
Ok(ReceivedMessage::Application(plaintext)) => {
let (sender_key, app_bytes) = {
let after_unpad = quicproquo_core::padding::unpad(&plaintext)
let after_unpad = quicprochat_core::padding::unpad(&plaintext)
.unwrap_or_else(|_| plaintext.clone());
if quicproquo_core::sealed_sender::is_sealed(&after_unpad) {
match quicproquo_core::sealed_sender::unseal(&after_unpad) {
if quicprochat_core::sealed_sender::is_sealed(&after_unpad) {
match quicprochat_core::sealed_sender::unseal(&after_unpad) {
Ok((sk, inner)) => (sk.to_vec(), inner),
Err(_) => (my_key.clone(), after_unpad),
}
@@ -493,8 +510,8 @@ async fn send_message(
.context("serialize app message")?;
// Metadata protection: seal + pad.
let sealed = quicproquo_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicproquo_core::padding::pad(&sealed);
let sealed = quicprochat_core::sealed_sender::seal(&identity, &app_payload);
let padded = quicprochat_core::padding::pad(&sealed);
let ct = member.send_message(&padded).context("MLS encrypt")?;
@@ -543,7 +560,7 @@ async fn send_message(
// ── TUI entry point ───────────────────────────────────────────────────────────
/// Entry point for `qpq tui`. Sets up the terminal, runs the event loop, and
/// Entry point for `qpc tui`. Sets up the terminal, runs the event loop, and
/// restores the terminal on exit.
pub async fn run_tui(
state_path: &Path,

View File

@@ -1,10 +1,10 @@
//! v2 REPL — thin shell over `quicproquo_sdk::QpqClient`.
//! v2 REPL — thin shell over `quicprochat_sdk::QpqClient`.
//!
//! Provides an interactive command-line interface with categorized `/help`,
//! tab-completion, and a background event listener. Delegates all crypto,
//! MLS, and RPC work to the SDK.
//!
//! Build: `cargo build -p quicproquo-client --features v2`
//! Build: `cargo build -p quicprochat-client --features v2`
use std::path::PathBuf;
use std::process::{Child, Command as ProcessCommand};
@@ -12,10 +12,10 @@ use std::sync::Arc;
use std::time::Duration;
use anyhow::Context;
use quicproquo_core::{GroupMember, IdentityKeypair};
use quicproquo_sdk::client::QpqClient;
use quicproquo_sdk::conversation::{ConversationId, ConversationKind, StoredMessage};
use quicproquo_sdk::events::ClientEvent;
use quicprochat_core::{GroupMember, IdentityKeypair};
use quicprochat_sdk::client::QpqClient;
use quicprochat_sdk::conversation::{ConversationId, ConversationKind, StoredMessage};
use quicprochat_sdk::events::ClientEvent;
use rustyline::completion::{Completer, Pair};
use rustyline::error::ReadlineError;
use rustyline::highlight::Highlighter;
@@ -100,6 +100,8 @@ const COMMANDS: &[CmdDef] = &[
CmdDef { name: "/help", aliases: &["/?"], category: Category::Utility, description: "Show this help message", usage: "/help" },
CmdDef { name: "/quit", aliases: &["/q", "/exit"], category: Category::Utility, description: "Exit the REPL", usage: "/quit" },
CmdDef { name: "/clear", aliases: &[], category: Category::Utility, description: "Clear the terminal", usage: "/clear" },
CmdDef { name: "/search", aliases: &[], category: Category::Messaging, description: "Search messages across all conversations", usage: "/search <query>" },
CmdDef { name: "/delete-conversation", aliases: &["/delconv"], category: Category::Messaging, description: "Delete a conversation and its messages", usage: "/delete-conversation [name]" },
CmdDef { name: "/health", aliases: &[], category: Category::Debug, description: "Check server connection health", usage: "/health" },
CmdDef { name: "/status", aliases: &[], category: Category::Debug, description: "Show connection and auth state", usage: "/status" },
];
@@ -216,14 +218,14 @@ impl Drop for ServerGuard {
fn find_server_binary() -> Option<PathBuf> {
if let Ok(exe) = std::env::current_exe() {
let sibling = exe.with_file_name("qpq-server");
let sibling = exe.with_file_name("qpc-server");
if sibling.exists() {
return Some(sibling);
}
}
std::env::var_os("PATH").and_then(|paths| {
std::env::split_paths(&paths)
.map(|dir| dir.join("qpq-server"))
.map(|dir| dir.join("qpc-server"))
.find(|p| p.exists())
})
}
@@ -235,7 +237,7 @@ async fn auto_start_server(addr: &str) -> ServerGuard {
let binary = match find_server_binary() {
Some(b) => b,
None => {
display::print_status("server not reachable and qpq-server binary not found");
display::print_status("server not reachable and qpc-server binary not found");
return ServerGuard(None);
}
};
@@ -294,6 +296,15 @@ fn show_event(event: &ClientEvent) {
};
display::print_incoming(&sender, body);
}
ClientEvent::Connected => {
display::print_status("connected to server");
}
ClientEvent::Disconnected { reason } => {
display::print_error(&format!("disconnected: {reason}"));
}
ClientEvent::Reconnecting { attempt } => {
display::print_status(&format!("reconnecting... (attempt {attempt})"));
}
ClientEvent::ConversationCreated { display_name, .. } => {
display::print_status(&format!("new conversation: {display_name}"));
}
@@ -311,7 +322,7 @@ fn show_event(event: &ClientEvent) {
// ── Help ────────────────────────────────────────────────────────────────────
fn print_help() {
println!("\n{BOLD}quicproquo v2 REPL{RESET}\n");
println!("\n{BOLD}quicprochat v2 REPL{RESET}\n");
for cat in Category::all() {
println!("{BOLD}{}{RESET}", cat.label());
for cmd in COMMANDS.iter().filter(|c| c.category == *cat) {
@@ -388,6 +399,8 @@ async fn dispatch(
"/switch" | "/sw" => do_switch(client, st, args)?,
"/group" | "/g" => do_group(client, st, args).await?,
"/devices" => do_devices(client, args).await?,
"/search" => do_search(client, args)?,
"/delete-conversation" | "/delconv" => do_delete_conversation(client, st, args)?,
_ => display::print_error(&format!("unknown command: {cmd} (try /help)")),
}
Ok(false)
@@ -397,7 +410,7 @@ async fn dispatch(
fn do_status(client: &QpqClient, st: &ReplState) {
println!("{BOLD}Status{RESET}");
println!(" connected: {}", if client.is_connected() { "yes" } else { "no" });
println!(" connection: {}", client.connection_state());
println!(" authenticated: {}", if client.is_authenticated() { "yes" } else { "no" });
println!(" username: {}", client.username().unwrap_or("(none)"));
println!(" conversation: {}", st.current_display_name.as_deref().unwrap_or("(none)"));
@@ -462,14 +475,14 @@ async fn do_login(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
// Try to load identity keypair from state file.
let state_path = &client.config_state_path();
if state_path.exists() {
match quicproquo_sdk::state::load_state(state_path, Some(pass)) {
match quicprochat_sdk::state::load_state(state_path, Some(pass)) {
Ok(stored) => {
let kp = IdentityKeypair::from_seed(stored.identity_seed);
st.identity = Some(Arc::new(kp));
}
Err(_) => {
// Try without password (unencrypted state).
if let Ok(stored) = quicproquo_sdk::state::load_state(state_path, None) {
if let Ok(stored) = quicprochat_sdk::state::load_state(state_path, None) {
let kp = IdentityKeypair::from_seed(stored.identity_seed);
st.identity = Some(Arc::new(kp));
}
@@ -493,7 +506,7 @@ async fn do_resolve(client: &QpqClient, args: &str) -> anyhow::Result<()> {
return Ok(());
}
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
match quicproquo_sdk::users::resolve_user(rpc, name).await? {
match quicprochat_sdk::users::resolve_user(rpc, name).await? {
Some(key) => println!(" {name} -> {}", hex::encode(&key)),
None => display::print_error(&format!("user '{name}' not found")),
}
@@ -510,7 +523,7 @@ async fn do_safety(client: &QpqClient, st: &ReplState, args: &str) -> anyhow::Re
let my_key = identity.public_key_bytes();
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicproquo_sdk::users::resolve_user(rpc, name)
let peer_key = quicprochat_sdk::users::resolve_user(rpc, name)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{name}' not found"))?;
if peer_key.len() != 32 {
@@ -519,7 +532,7 @@ async fn do_safety(client: &QpqClient, st: &ReplState, args: &str) -> anyhow::Re
let mut peer_arr = [0u8; 32];
peer_arr.copy_from_slice(&peer_key);
let sn = quicproquo_core::compute_safety_number(&my_key, &peer_arr);
let sn = quicprochat_core::compute_safety_number(&my_key, &peer_arr);
println!("\n{BOLD}Safety number with {name}:{RESET}");
println!(" {sn}\n");
println!("{DIM}Compare with {name} over a trusted channel.{RESET}");
@@ -536,7 +549,7 @@ async fn do_refresh_key(client: &QpqClient, st: &ReplState) -> anyhow::Result<()
.map_err(|e| anyhow::anyhow!("generate key package: {e}"))?;
let pub_key = identity.public_key_bytes();
let fp = quicproquo_sdk::keys::upload_key_package(rpc, &pub_key, &kp_bytes).await?;
let fp = quicprochat_sdk::keys::upload_key_package(rpc, &pub_key, &kp_bytes).await?;
display::print_status(&format!(
"KeyPackage uploaded (fp: {})",
hex::encode(&fp[..8.min(fp.len())])
@@ -554,7 +567,7 @@ async fn do_dm(client: &mut QpqClient, st: &mut ReplState, args: &str) -> anyhow
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicproquo_sdk::users::resolve_user(rpc, username)
let peer_key = quicprochat_sdk::users::resolve_user(rpc, username)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{username}' not found"))?;
@@ -565,13 +578,13 @@ async fn do_dm(client: &mut QpqClient, st: &mut ReplState, args: &str) -> anyhow
return Ok(());
}
let peer_kp = quicproquo_sdk::keys::fetch_key_package(rpc, &peer_key)
let peer_kp = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("peer has no available KeyPackage"))?;
let mut member = GroupMember::new(Arc::clone(&identity));
let (conv_id, was_new) = quicproquo_sdk::groups::create_dm(
let (conv_id, was_new) = quicprochat_sdk::groups::create_dm(
rpc, conv_store, &mut member, &identity,
&peer_key, &peer_kp, None, None,
).await?;
@@ -599,7 +612,7 @@ async fn do_send(client: &QpqClient, st: &ReplState, msg: &str) -> anyhow::Resul
.load_conversation(conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let my_pub = identity.public_key_bytes();
let recipients: Vec<Vec<u8>> = conv
@@ -614,13 +627,13 @@ async fn do_send(client: &QpqClient, st: &ReplState, msg: &str) -> anyhow::Resul
}
let hybrid_keys = vec![None; recipients.len()];
quicproquo_sdk::messaging::send_message(
quicprochat_sdk::messaging::send_message(
rpc, &mut member, &identity, msg, &recipients, &hybrid_keys, conv_id.0.as_slice(),
).await?;
quicproquo_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
quicprochat_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
let now = quicproquo_sdk::conversation::now_ms();
let now = quicprochat_sdk::conversation::now_ms();
conv_store.save_message(&StoredMessage {
conversation_id: conv_id.clone(),
message_id: None,
@@ -647,10 +660,10 @@ async fn do_recv(client: &QpqClient, st: &ReplState) -> anyhow::Result<()> {
.load_conversation(conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let my_pub = identity.public_key_bytes();
let messages = quicproquo_sdk::messaging::receive_messages(
let messages = quicprochat_sdk::messaging::receive_messages(
rpc, &mut member, &my_pub, None, conv_id.0.as_slice(), &[],
).await?;
@@ -659,10 +672,10 @@ async fn do_recv(client: &QpqClient, st: &ReplState) -> anyhow::Result<()> {
return Ok(());
}
quicproquo_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
quicprochat_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
for m in &messages {
let sender_name = quicproquo_sdk::users::resolve_identity(rpc, &m.sender_key)
let sender_name = quicprochat_sdk::users::resolve_identity(rpc, &m.sender_key)
.await
.ok()
.flatten();
@@ -670,13 +683,13 @@ async fn do_recv(client: &QpqClient, st: &ReplState) -> anyhow::Result<()> {
let sender = sender_name.as_deref().unwrap_or(&sender_hex);
let body = match &m.message {
quicproquo_core::AppMessage::Chat { body, .. } => {
quicprochat_core::AppMessage::Chat { body, .. } => {
String::from_utf8_lossy(body).to_string()
}
other => format!("{other:?}"),
};
let now = quicproquo_sdk::conversation::now_ms();
let now = quicprochat_sdk::conversation::now_ms();
println!("{DIM}[{}]{RESET} {CYAN}{BOLD}{sender}{RESET}: {body}", ts(now));
conv_store.save_message(&StoredMessage {
@@ -772,7 +785,7 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let identity = st.require_identity()?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let mut member = GroupMember::new(Arc::clone(&identity));
let conv_id = quicproquo_sdk::groups::create_group(conv_store, &mut member, name)?;
let conv_id = quicprochat_sdk::groups::create_group(conv_store, &mut member, name)?;
st.set_conversation(conv_id, format!("#{name}"));
display::print_status(&format!("group #{name} created"));
}
@@ -788,10 +801,10 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicproquo_sdk::users::resolve_user(rpc, user)
let peer_key = quicprochat_sdk::users::resolve_user(rpc, user)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{user}' not found"))?;
let peer_kp = quicproquo_sdk::keys::fetch_key_package(rpc, &peer_key)
let peer_kp = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("peer has no KeyPackage"))?;
@@ -799,9 +812,9 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("group '{group}' not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
quicproquo_sdk::groups::invite_to_group(
quicprochat_sdk::groups::invite_to_group(
rpc, conv_store, &mut member, &identity,
&conv_id, &peer_key, &peer_kp, None, None,
).await?;
@@ -816,8 +829,8 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
quicproquo_sdk::groups::leave_group(rpc, conv_store, &mut member, &conv_id).await?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
quicprochat_sdk::groups::leave_group(rpc, conv_store, &mut member, &conv_id).await?;
display::print_status("left group");
}
@@ -834,7 +847,7 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
for key in &conv.member_keys {
let short = hex::encode(&key[..4.min(key.len())]);
if let Ok(rpc) = client.rpc() {
if let Ok(Some(n)) = quicproquo_sdk::users::resolve_identity(rpc, key).await {
if let Ok(Some(n)) = quicprochat_sdk::users::resolve_identity(rpc, key).await {
println!(" @{n} {DIM}({short}){RESET}");
continue;
}
@@ -855,14 +868,14 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicproquo_sdk::users::resolve_user(rpc, user)
let peer_key = quicprochat_sdk::users::resolve_user(rpc, user)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{user}' not found"))?;
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
quicproquo_sdk::groups::remove_member_from_group(
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
quicprochat_sdk::groups::remove_member_from_group(
rpc, conv_store, &mut member, &conv_id, &peer_key,
).await?;
display::print_status(&format!("removed @{user} from group"));
@@ -877,7 +890,7 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let conv_id = st.require_conversation()?.clone();
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
quicproquo_sdk::groups::set_group_metadata(
quicprochat_sdk::groups::set_group_metadata(
rpc, conv_store, &conv_id, new_name, "", &[],
).await?;
st.set_conversation(conv_id, format!("#{new_name}"));
@@ -892,8 +905,8 @@ async fn do_group(client: &mut QpqClient, st: &mut ReplState, args: &str) -> any
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicproquo_sdk::groups::restore_mls_state(&conv, &identity)?;
quicproquo_sdk::groups::rotate_group_keys(rpc, conv_store, &mut member, &conv_id).await?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
quicprochat_sdk::groups::rotate_group_keys(rpc, conv_store, &mut member, &conv_id).await?;
display::print_status("group keys rotated");
}
@@ -911,7 +924,7 @@ async fn do_devices(client: &mut QpqClient, args: &str) -> anyhow::Result<()> {
match sub {
"list" => {
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let devices = quicproquo_sdk::devices::list_devices(rpc).await?;
let devices = quicprochat_sdk::devices::list_devices(rpc).await?;
if devices.is_empty() {
display::print_status("no devices registered");
} else {
@@ -941,7 +954,7 @@ async fn do_devices(client: &mut QpqClient, args: &str) -> anyhow::Result<()> {
let mut dev_id = vec![0u8; 16];
rand::rngs::OsRng.fill_bytes(&mut dev_id);
let was_new =
quicproquo_sdk::devices::register_device(rpc, &dev_id, name).await?;
quicprochat_sdk::devices::register_device(rpc, &dev_id, name).await?;
if was_new {
display::print_status(&format!(
"device registered: {name} (id: {})",
@@ -961,7 +974,7 @@ async fn do_devices(client: &mut QpqClient, args: &str) -> anyhow::Result<()> {
let id_bytes = hex::decode(id_hex)
.map_err(|e| anyhow::anyhow!("invalid device_id hex: {e}"))?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let revoked = quicproquo_sdk::devices::revoke_device(rpc, &id_bytes).await?;
let revoked = quicprochat_sdk::devices::revoke_device(rpc, &id_bytes).await?;
if revoked {
display::print_status(&format!("device revoked: {id_hex}"));
} else {
@@ -974,6 +987,81 @@ async fn do_devices(client: &mut QpqClient, args: &str) -> anyhow::Result<()> {
Ok(())
}
// ── Search ──────────────────────────────────────────────────────────────────
fn do_search(client: &QpqClient, args: &str) -> anyhow::Result<()> {
let query = args.trim();
if query.is_empty() {
display::print_error("usage: /search <query>");
return Ok(());
}
let results = client
    .conversations()
    .map_err(|e| anyhow::anyhow!("{e}"))?
    .search_messages(query, 25)?;
if results.is_empty() {
display::print_status(&format!("no messages matching \"{query}\""));
return Ok(());
}
println!("\n{BOLD}Search results for \"{query}\"{RESET} ({} matches)\n", results.len());
for r in &results {
let ts = format_timestamp_ms(r.timestamp_ms);
let sender = r.sender_name.as_deref().unwrap_or("?");
println!(
" {DIM}[{ts}]{RESET} {CYAN}{}{RESET} > {GREEN}{sender}{RESET}: {}",
r.conversation_name,
r.body,
);
}
println!();
Ok(())
}
/// Format a millisecond Unix timestamp as zero-padded HH:MM (UTC time of day; the date is dropped).
fn format_timestamp_ms(ms: u64) -> String {
let secs = ms / 1000;
let hours = (secs % 86400) / 3600;
let minutes = (secs % 3600) / 60;
format!("{hours:02}:{minutes:02}")
}
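Because the helper reduces the timestamp modulo one day, two messages a day apart render identically; a quick standalone check of that truncation (input values hypothetical):

```rust
fn format_timestamp_ms(ms: u64) -> String {
    let secs = ms / 1000;
    let hours = (secs % 86400) / 3600;
    let minutes = (secs % 3600) / 60;
    format!("{hours:02}:{minutes:02}")
}

fn main() {
    // 1 h 2 min 3 s after midnight → zero-padded HH:MM.
    assert_eq!(format_timestamp_ms(3_723_000), "01:02");
    // Exactly one day later the label repeats: the date component is lost.
    assert_eq!(format_timestamp_ms(3_723_000 + 86_400_000), "01:02");
}
```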
// ── Delete conversation ─────────────────────────────────────────────────────
fn do_delete_conversation(
client: &QpqClient,
st: &mut ReplState,
args: &str,
) -> anyhow::Result<()> {
let name = args.trim();
// Find by name, or use current conversation.
let target = if name.is_empty() {
st.current_conversation.clone()
} else {
let convs = client
    .conversations()
    .map_err(|e| anyhow::anyhow!("{e}"))?
    .list_conversations()?;
convs
.iter()
.find(|c| c.display_name.eq_ignore_ascii_case(name))
.map(|c| c.id.clone())
};
let Some(conv_id) = target else {
display::print_error("no matching conversation (specify name or switch first)");
return Ok(());
};
let deleted = client
    .conversations()
    .map_err(|e| anyhow::anyhow!("{e}"))?
    .delete_conversation(&conv_id)?;
if deleted {
// If we deleted the active conversation, clear it.
if st.current_conversation.as_ref() == Some(&conv_id) {
st.current_conversation = None;
st.current_display_name = None;
}
display::print_status("conversation deleted");
} else {
display::print_error("conversation not found");
}
Ok(())
}
// ── Entry point ─────────────────────────────────────────────────────────────
/// Run the v2 REPL over a `QpqClient`.
@@ -990,6 +1078,9 @@ pub async fn run_v2_repl(
// Connect to server.
client.connect().await.context("connect to server")?;
// Start heartbeat for proactive dead-connection detection.
client.start_heartbeat();
// Background event listener.
let rx = client.subscribe();
spawn_event_listener(rx);
@@ -1004,8 +1095,8 @@ pub async fn run_v2_repl(
// Load identity from state.
let state_path = client.config_state_path();
if state_path.exists() {
if let Ok(stored) = quicproquo_sdk::state::load_state(&state_path, Some(pass))
.or_else(|_| quicproquo_sdk::state::load_state(&state_path, None))
if let Ok(stored) = quicprochat_sdk::state::load_state(&state_path, Some(pass))
.or_else(|_| quicprochat_sdk::state::load_state(&state_path, None))
{
let kp = IdentityKeypair::from_seed(stored.identity_seed);
st.identity = Some(Arc::new(kp));
@@ -1016,7 +1107,7 @@ pub async fn run_v2_repl(
}
}
println!("\n{BOLD}quicproquo v2 REPL{RESET}");
println!("\n{BOLD}quicprochat v2 REPL{RESET}");
println!("{DIM}Type /help for commands, /quit to exit.{RESET}\n");
if let Some(u) = client.username() {
display::print_status(&format!("authenticated as {u}"));

View File

@@ -1,4 +1,4 @@
//! Full-screen Ratatui TUI for quicproquo v2, driven by the SDK event system.
//! Full-screen Ratatui TUI for quicprochat v2, driven by the SDK event system.
//!
//! Layout:
//! +-- Conversations -+-- Messages ------------------------------+
@@ -20,6 +20,8 @@
//! Ctrl+C / Ctrl+Q -- quit
//!
//! Feature gate: requires both `v2` and `tui` features.
//!
//! Messages are sent via the SDK's MLS encryption pipeline (sealed sender + hybrid wrap).
use std::time::Duration;
@@ -38,9 +40,12 @@ use ratatui::{
};
use tokio::sync::broadcast;
use quicproquo_sdk::client::QpqClient;
use quicproquo_sdk::conversation::ConversationStore;
use quicproquo_sdk::events::ClientEvent;
use std::sync::Arc;
use quicprochat_core::IdentityKeypair;
use quicprochat_sdk::client::{ConnectionState, QpqClient};
use quicprochat_sdk::conversation::{ConversationId, ConversationStore, StoredMessage};
use quicprochat_sdk::events::ClientEvent;
// ── Data Types ──────────────────────────────────────────────────────────────
@@ -84,10 +89,12 @@ pub struct TuiApp {
focus: Focus,
/// Notification line (shown briefly, e.g. "Message sent", "Error: ...").
notification: Option<String>,
/// Whether the client is currently connected.
connected: bool,
/// Current connection state.
conn_state: quicprochat_sdk::client::ConnectionState,
/// Current MLS epoch for the active conversation (if available).
mls_epoch: Option<u64>,
/// Identity keypair for MLS operations (set after login).
identity: Option<Arc<IdentityKeypair>>,
}
impl TuiApp {
@@ -105,8 +112,9 @@ impl TuiApp {
server_addr: server_addr.to_string(),
focus: Focus::Input,
notification: None,
connected: false,
conn_state: ConnectionState::Disconnected,
mls_epoch: None,
identity: None,
}
}
@@ -146,7 +154,15 @@ impl TuiApp {
}
fn update_status(&mut self) {
let conn_indicator = if self.connected { "Online" } else { "Offline" };
let conn_indicator = match self.conn_state {
ConnectionState::Connected => "Connected",
ConnectionState::Reconnecting { attempt } => {
// The reconnecting label needs `format!`, so this arm can't share the
// `&str` type of the others; delegate to a String-building helper.
return self.update_status_reconnecting(attempt);
}
ConnectionState::Disconnected => "Offline",
};
let user = self
.username
.as_deref()
@@ -164,6 +180,25 @@ impl TuiApp {
if conv_count == 1 { "" } else { "s" }
);
}
fn update_status_reconnecting(&mut self, attempt: u32) {
let user = self
.username
.as_deref()
.unwrap_or("not logged in");
let conv_count = self.conversations.len();
let epoch_str = match self.mls_epoch {
Some(e) => format!("epoch {e}"),
None => "epoch --".to_string(),
};
self.status_line = format!(
"Reconnecting... (attempt {attempt}) | {} | {} | {} conversation{} | MLS {epoch_str}",
self.server_addr,
user,
conv_count,
if conv_count == 1 { "" } else { "s" }
);
}
}
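The two-function split above exists because the match arms otherwise can't agree on a type (`&str` vs a formatted `String`). A hypothetical alternative, not what the diff does, is to keep one function by making the indicator an owned `String` (the enum below is a minimal stand-in for the SDK's real `ConnectionState`):

```rust
// Minimal stand-in; the real enum lives in quicprochat_sdk::client.
enum ConnectionState {
    Connected,
    Reconnecting { attempt: u32 },
    Disconnected,
}

fn conn_indicator(state: &ConnectionState) -> String {
    match state {
        ConnectionState::Connected => "Connected".to_string(),
        ConnectionState::Reconnecting { attempt } => {
            format!("Reconnecting... (attempt {attempt})")
        }
        ConnectionState::Disconnected => "Offline".to_string(),
    }
}

fn main() {
    let r = ConnectionState::Reconnecting { attempt: 2 };
    assert_eq!(conn_indicator(&r), "Reconnecting... (attempt 2)");
    assert_eq!(conn_indicator(&ConnectionState::Disconnected), "Offline");
}
```

With this shape, `update_status` could interpolate the indicator directly into the status line, at the cost of one `String` allocation per redraw.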
// ── Terminal Drop Guard ─────────────────────────────────────────────────────
@@ -195,7 +230,7 @@ pub async fn run_v2_tui(client: &mut QpqClient) -> anyhow::Result<()> {
"disconnected"
};
let mut app = TuiApp::new(server_addr);
app.connected = client.is_connected();
app.conn_state = client.connection_state();
// Populate initial state from client.
if let Some(name) = client.username() {
@@ -222,6 +257,9 @@ pub async fn run_v2_tui(client: &mut QpqClient) -> anyhow::Result<()> {
app.update_status();
// Start heartbeat for proactive dead-connection detection.
client.start_heartbeat();
// Subscribe to SDK events.
let mut event_rx = client.subscribe();
@@ -275,15 +313,20 @@ pub async fn run_v2_tui(client: &mut QpqClient) -> anyhow::Result<()> {
fn handle_sdk_event(app: &mut TuiApp, event: ClientEvent) {
match event {
ClientEvent::Connected => {
app.connected = true;
app.conn_state = ConnectionState::Connected;
app.notification = Some("Connected to server".to_string());
app.update_status();
}
ClientEvent::Disconnected { reason } => {
app.connected = false;
app.conn_state = ConnectionState::Disconnected;
app.notification = Some(format!("Disconnected: {reason}"));
app.update_status();
}
ClientEvent::Reconnecting { attempt } => {
app.conn_state = ConnectionState::Reconnecting { attempt };
app.notification = Some(format!("Reconnecting... (attempt {attempt})"));
app.update_status();
}
ClientEvent::Registered { username } => {
app.notification = Some(format!("Registered as {username}"));
}
@@ -535,10 +578,81 @@ async fn handle_input(app: &mut TuiApp, client: &mut QpqClient, text: &str) {
// Snap to bottom.
app.scroll_offset = 0;
// TODO: actually send via SDK when the send pipeline is wired up.
// For now, emit a notification.
app.notification = Some(format!("Sent: {text}"));
// Send via MLS encryption pipeline.
let Some(conv_id_bytes) = app.active_conv_id().copied() else {
    app.notification = Some("No active conversation".to_string());
    return;
};
let conv_id = ConversationId(conv_id_bytes);
let send_result = send_tui_message(client, app, &conv_id, text).await;
match send_result {
Ok(()) => {
app.notification = Some("Sent".to_string());
}
Err(e) => {
app.notification = Some(format!("Send failed: {e}"));
}
}
}
}
/// Send a message via the SDK's MLS encryption pipeline.
async fn send_tui_message(
client: &QpqClient,
app: &TuiApp,
conv_id: &ConversationId,
text: &str,
) -> anyhow::Result<()> {
let identity = app
.identity
.as_ref()
.ok_or_else(|| anyhow::anyhow!("not logged in — identity not loaded"))?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv = conv_store
.load_conversation(conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, identity)?;
let my_pub = identity.public_key_bytes();
let recipients: Vec<Vec<u8>> = conv
.member_keys
.iter()
.filter(|k| k.as_slice() != my_pub.as_slice())
.cloned()
.collect();
if recipients.is_empty() {
return Err(anyhow::anyhow!("no recipients in conversation"));
}
let hybrid_keys = vec![None; recipients.len()];
quicprochat_sdk::messaging::send_message(
rpc,
&mut member,
identity,
text,
&recipients,
&hybrid_keys,
conv_id.0.as_slice(),
)
.await?;
quicprochat_sdk::groups::save_mls_state(conv_store, conv_id, &member)?;
let now = quicprochat_sdk::conversation::now_ms();
conv_store.save_message(&StoredMessage {
conversation_id: conv_id.clone(),
message_id: None,
sender_key: my_pub.to_vec(),
sender_name: client.username().map(|s| s.to_string()),
body: text.to_string(),
msg_type: "chat".to_string(),
ref_msg_id: None,
timestamp_ms: now,
is_outgoing: true,
})?;
Ok(())
}
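`send_tui_message` fans the ciphertext out to every group member except the sender. The recipient-filtering step can be sketched standalone (helper name and key values hypothetical):

```rust
/// All member keys except our own public key — the MLS ciphertext recipients.
fn recipients_excluding(member_keys: &[Vec<u8>], my_pub: &[u8]) -> Vec<Vec<u8>> {
    member_keys
        .iter()
        .filter(|k| k.as_slice() != my_pub)
        .cloned()
        .collect()
}

fn main() {
    let alice = vec![1u8; 32];
    let bob = vec![2u8; 32];
    let carol = vec![3u8; 32];
    let members = vec![alice.clone(), bob.clone(), carol.clone()];
    // Sending as alice: only bob and carol receive.
    assert_eq!(recipients_excluding(&members, &alice), vec![bob, carol]);
}
```

An empty result (e.g. a solo conversation) is treated as an error by the caller rather than silently sending to no one.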
/// Handle a /command.
@@ -824,7 +938,7 @@ fn draw_input(frame: &mut Frame, app: &TuiApp, area: Rect) {
frame.render_widget(input_text, area);
// Position cursor in the input area.
if !app.input.is_empty() || true {
if !app.input.is_empty() {
let cursor_x = area.x + 1 + app.input_cursor as u16;
let cursor_y = area.y + 1;
if cursor_x < area.x + area.width - 1 {
@@ -834,12 +948,11 @@ fn draw_input(frame: &mut Frame, app: &TuiApp, area: Rect) {
}
fn draw_status(frame: &mut Frame, app: &TuiApp, area: Rect) {
let conn_color = if app.connected {
Color::Green
} else {
Color::Red
let (conn_color, conn_indicator) = match app.conn_state {
ConnectionState::Connected => (Color::Green, " ON "),
ConnectionState::Reconnecting { .. } => (Color::Yellow, " ... "),
ConnectionState::Disconnected => (Color::Red, " OFF "),
};
let conn_indicator = if app.connected { " ON " } else { " OFF " };
let spans = vec![
Span::styled(
@@ -859,7 +972,7 @@ fn draw_status(frame: &mut Frame, app: &TuiApp, area: Rect) {
fn draw_help(frame: &mut Frame, area: Rect) {
let help_text = vec![
Line::from(Span::styled(
" quicproquo TUI -- Help",
" quicprochat TUI -- Help",
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD),
@@ -954,7 +1067,7 @@ fn load_messages_for_selected(app: &mut TuiApp, client: &QpqClient) {
};
let sdk_conv_id =
quicproquo_sdk::conversation::ConversationId::from_slice(&conv_id);
quicprochat_sdk::conversation::ConversationId::from_slice(&conv_id);
let sdk_conv_id = match sdk_conv_id {
Some(id) => id,
None => return,
@@ -1014,7 +1127,7 @@ mod tests {
fn make_app() -> TuiApp {
let mut app = TuiApp::new("127.0.0.1:7000");
app.connected = true;
app.conn_state = ConnectionState::Connected;
app.username = Some("alice".to_string());
app.conversations.push(ConversationItem {
id: [1u8; 16],
@@ -1062,12 +1175,12 @@ mod tests {
}
#[test]
fn status_bar_shows_online() {
fn status_bar_shows_connected() {
let mut app = TuiApp::new("127.0.0.1:7000");
app.connected = true;
app.conn_state = ConnectionState::Connected;
app.username = Some("alice".to_string());
app.update_status();
assert!(app.status_line.contains("Online"));
assert!(app.status_line.contains("Connected"));
assert!(app.status_line.contains("alice"));
assert!(app.status_line.contains("MLS epoch --"));
}
@@ -1075,15 +1188,32 @@ mod tests {
#[test]
fn status_bar_shows_offline() {
let mut app = TuiApp::new("127.0.0.1:7000");
app.connected = false;
app.conn_state = ConnectionState::Disconnected;
app.update_status();
assert!(app.status_line.contains("Offline"));
}
#[test]
fn status_bar_shows_reconnecting() {
let mut app = TuiApp::new("127.0.0.1:7000");
app.conn_state = ConnectionState::Reconnecting { attempt: 2 };
app.update_status();
assert!(
app.status_line.contains("Reconnecting"),
"expected Reconnecting in: {}",
app.status_line
);
assert!(
app.status_line.contains("attempt 2"),
"expected attempt count in: {}",
app.status_line
);
}
#[test]
fn status_bar_shows_epoch() {
let mut app = TuiApp::new("127.0.0.1:7000");
app.connected = true;
app.conn_state = ConnectionState::Connected;
app.mls_epoch = Some(42);
app.update_status();
assert!(app.status_line.contains("MLS epoch 42"));


@@ -1,17 +1,17 @@
//! quicproquo CLI client library.
//! quicprochat CLI client library.
//!
//! # KeyPackage expiry and refresh
//!
//! KeyPackages are single-use (consumed when someone fetches them for an invite) and the server
//! may enforce a TTL (e.g. 24 hours). To stay invitable, run `qpq refresh-keypackage`
//! may enforce a TTL (e.g. 24 hours). To stay invitable, run `qpc refresh-keypackage`
//! periodically (e.g. before the server TTL) or after your KeyPackage was consumed:
//!
//! ```bash
//! qpq refresh-keypackage --state qpq-state.bin --server 127.0.0.1:7000
//! qpc refresh-keypackage --state qpc-state.bin --server 127.0.0.1:7000
//! ```
//!
//! Use the same `--access-token` (or `QPQ_ACCESS_TOKEN`) as for other authenticated
//! commands. See the [running-the-client](https://docs.quicproquo.dev/getting-started/running-the-client)
//! commands. See the [running-the-client](https://docs.quicprochat.dev/getting-started/running-the-client)
//! docs for details.
use std::sync::RwLock;
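The periodic refresh suggested in the doc comment above can be automated. A hypothetical crontab entry (the 12-hour interval, state path, and server address are assumptions, chosen to stay inside the example 24-hour TTL):

```
# Refresh the KeyPackage every 12 hours, well inside an assumed 24h server TTL.
0 */12 * * * qpc refresh-keypackage --state $HOME/qpc-state.bin --server 127.0.0.1:7000
```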


@@ -1,4 +1,4 @@
//! quicproquo CLI client.
//! quicprochat CLI client.
// ── v2 feature gate: when compiled with --features v2, use the SDK-based CLI.
#[cfg(feature = "v2")]
@@ -19,21 +19,168 @@ use anyhow::Context;
#[cfg(not(feature = "v2"))]
use clap::{Parser, Subcommand};
#[cfg(not(feature = "v2"))]
use quicproquo_client::{
use quicprochat_client::{
cmd_chat, cmd_check_key, cmd_create_group, cmd_demo_group, cmd_export, cmd_export_verify,
cmd_fetch_key, cmd_health, cmd_invite, cmd_join, cmd_login, cmd_ping, cmd_recv, cmd_register,
cmd_register_state, cmd_refresh_keypackage, cmd_register_user, cmd_send, cmd_whoami,
init_auth, run_repl, set_insecure_skip_verify, ClientAuth,
};
#[cfg(all(feature = "tui", not(feature = "v2")))]
use quicproquo_client::client::tui::run_tui;
use quicprochat_client::client::tui::run_tui;
// ── Config file loading ──────────────────────────────────────────────────────
//
// Loads a TOML config file and sets QPQ_* environment variables for values
// not already set. This runs BEFORE clap parses, so the natural precedence is:
// CLI flags > environment variables > config file > compiled defaults.
//
// Config file search order:
// 1. --config <path> (parsed manually from argv)
// 2. $QPC_CONFIG env var
// 3. $XDG_CONFIG_HOME/qpc/config.toml (usually ~/.config/qpc/config.toml)
// 4. ~/.qpc.toml
#[cfg(not(feature = "v2"))]
mod client_config {
use serde::Deserialize;
use std::path::PathBuf;
#[derive(Debug, Default, Deserialize)]
pub struct ClientFileConfig {
pub server: Option<String>,
pub server_name: Option<String>,
pub ca_cert: Option<String>,
pub username: Option<String>,
pub password: Option<String>,
pub access_token: Option<String>,
pub device_id: Option<String>,
pub state_password: Option<String>,
pub state: Option<String>,
pub danger_accept_invalid_certs: Option<bool>,
pub no_server: Option<bool>,
}
/// Find and load the config file. Returns the parsed config (or default if
/// no file is found).
pub fn load_client_config() -> ClientFileConfig {
let path = find_config_path();
let path = match path {
Some(p) if p.exists() => p,
_ => return ClientFileConfig::default(),
};
match std::fs::read_to_string(&path) {
Ok(contents) => match toml::from_str(&contents) {
Ok(cfg) => {
eprintln!("Loaded config: {}", path.display());
cfg
}
Err(e) => {
eprintln!("Warning: failed to parse {}: {e}", path.display());
ClientFileConfig::default()
}
},
Err(e) => {
eprintln!("Warning: failed to read {}: {e}", path.display());
ClientFileConfig::default()
}
}
}
fn find_config_path() -> Option<PathBuf> {
// 1. --config <path> from argv (before clap parses).
let args: Vec<String> = std::env::args().collect();
for i in 0..args.len().saturating_sub(1) {
if args[i] == "--config" || args[i] == "-c" {
return Some(PathBuf::from(&args[i + 1]));
}
}
// 2. $QPC_CONFIG env var.
if let Ok(p) = std::env::var("QPC_CONFIG") {
return Some(PathBuf::from(p));
}
// 3. $XDG_CONFIG_HOME/qpc/config.toml
let xdg = std::env::var("XDG_CONFIG_HOME")
.map(PathBuf::from)
.unwrap_or_else(|_| {
let home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
PathBuf::from(home).join(".config")
});
let xdg_path = xdg.join("qpc").join("config.toml");
if xdg_path.exists() {
return Some(xdg_path);
}
// 4. ~/.qpc.toml
if let Ok(home) = std::env::var("HOME") {
let home_path = PathBuf::from(home).join(".qpc.toml");
if home_path.exists() {
return Some(home_path);
}
}
None
}
/// Set QPQ_* env vars from config values, but only if they're not already set.
pub fn apply_config_to_env(cfg: &ClientFileConfig) {
fn set_if_empty(key: &str, val: &str) {
if std::env::var(key).is_err() {
std::env::set_var(key, val);
}
}
if let Some(ref v) = cfg.server {
set_if_empty("QPQ_SERVER", v);
}
if let Some(ref v) = cfg.server_name {
set_if_empty("QPQ_SERVER_NAME", v);
}
if let Some(ref v) = cfg.ca_cert {
set_if_empty("QPQ_CA_CERT", v);
}
if let Some(ref v) = cfg.username {
set_if_empty("QPQ_USERNAME", v);
}
if let Some(ref v) = cfg.password {
set_if_empty("QPQ_PASSWORD", v);
}
if let Some(ref v) = cfg.access_token {
set_if_empty("QPQ_ACCESS_TOKEN", v);
}
if let Some(ref v) = cfg.device_id {
set_if_empty("QPQ_DEVICE_ID", v);
}
if let Some(ref v) = cfg.state_password {
set_if_empty("QPQ_STATE_PASSWORD", v);
}
if let Some(ref v) = cfg.state {
set_if_empty("QPQ_STATE", v);
}
if let Some(v) = cfg.danger_accept_invalid_certs {
if v {
set_if_empty("QPQ_DANGER_ACCEPT_INVALID_CERTS", "true");
}
}
if let Some(v) = cfg.no_server {
if v {
set_if_empty("QPQ_NO_SERVER", "true");
}
}
}
}
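The `ClientFileConfig` struct above deserializes a flat TOML file. A hypothetical `~/.config/qpc/config.toml` using those fields might look like this (all values are illustrative; every key is optional, and `QPQ_*` env vars or CLI flags take precedence):

```
# qpc client config — overridden by QPQ_* env vars and CLI flags.
server = "127.0.0.1:7000"
server_name = "localhost"
ca_cert = "/etc/qpc/server-cert.der"   # hypothetical path
username = "alice"
state = "qpc-alice.bin"
no_server = true
```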
// ── CLI ───────────────────────────────────────────────────────────────────────
#[cfg(not(feature = "v2"))]
#[derive(Debug, Parser)]
#[command(name = "qpq", about = "quicproquo CLI client", version)]
#[command(name = "qpc", about = "quicprochat CLI client", version)]
struct Args {
/// Path to a TOML config file (auto-detected from ~/.config/qpc/config.toml or ~/.qpc.toml).
#[arg(long, short = 'c', global = true, env = "QPC_CONFIG")]
config: Option<PathBuf>,
/// Path to the server's TLS certificate (self-signed by default).
#[arg(
long,
@@ -82,7 +229,7 @@ struct Args {
// ── Default-repl args (used when no subcommand is given) ─────────
/// State file path (identity + MLS state). Used when running the default REPL.
#[arg(long, default_value = "qpq-state.bin", env = "QPQ_STATE")]
#[arg(long, default_value = "qpc-state.bin", env = "QPQ_STATE")]
state: PathBuf,
/// Server address (host:port). Used when running the default REPL.
@@ -97,7 +244,7 @@ struct Args {
#[arg(long, env = "QPQ_PASSWORD")]
password: Option<String>,
/// Do not auto-start a local qpq-server (useful when connecting to a remote server).
/// Do not auto-start a local qpc-server (useful when connecting to a remote server).
#[arg(long, env = "QPQ_NO_SERVER")]
no_server: bool,
@@ -144,7 +291,7 @@ enum Command {
/// State file path (identity + MLS state).
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -203,7 +350,7 @@ enum Command {
/// State file path (identity + MLS state).
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -219,7 +366,7 @@ enum Command {
/// State file path (identity + MLS state).
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -234,7 +381,7 @@ enum Command {
/// State file path (identity + MLS state).
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -252,7 +399,7 @@ enum Command {
Invite {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -267,7 +414,7 @@ enum Command {
Join {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -279,7 +426,7 @@ enum Command {
Send {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -300,7 +447,7 @@ enum Command {
Recv {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -321,7 +468,7 @@ enum Command {
Repl {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -333,7 +480,7 @@ enum Command {
/// OPAQUE password (prompted securely if --username is set but --password is not).
#[arg(long, env = "QPQ_PASSWORD")]
password: Option<String>,
/// Do not auto-start a local qpq-server.
/// Do not auto-start a local qpc-server.
#[arg(long, env = "QPQ_NO_SERVER")]
no_server: bool,
},
@@ -344,7 +491,7 @@ enum Command {
Tui {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -358,12 +505,12 @@ enum Command {
password: Option<String>,
},
/// Interactive 1:1 chat: type to send, incoming messages printed as [peer] <msg>. Ctrl+D to exit.
/// Interactive 1:1 chat: type to send, incoming messages printed as \[peer\] msg. Ctrl+D to exit.
/// In a two-person group, peer is chosen automatically; use --peer-key only with 3+ members.
Chat {
#[arg(
long,
default_value = "qpq-state.bin",
default_value = "qpc-state.bin",
env = "QPQ_STATE"
)]
state: PathBuf,
@@ -380,18 +527,18 @@ enum Command {
/// Export a conversation's message history to an encrypted, tamper-evident transcript file.
///
/// The output file uses Argon2id + ChaCha20-Poly1305 encryption with a SHA-256 hash chain
/// linking every record. Use `qpq export verify` to check chain integrity without decrypting.
/// linking every record. Use `qpc export verify` to check chain integrity without decrypting.
Export {
/// Path to the conversation database (.convdb file).
#[arg(long, default_value = "qpq-convdb.sqlite", env = "QPQ_CONV_DB")]
#[arg(long, default_value = "qpc-convdb.sqlite", env = "QPQ_CONV_DB")]
conv_db: PathBuf,
/// Conversation ID to export (32 hex chars = 16 bytes).
#[arg(long)]
conv_id: String,
/// Output path for the .qpqt transcript file.
#[arg(long, default_value = "transcript.qpqt")]
/// Output path for the .qpct transcript file.
#[arg(long, default_value = "transcript.qpct")]
output: PathBuf,
/// Password used to encrypt the transcript (separate from the state/DB password).
@@ -405,7 +552,7 @@ enum Command {
/// Verify the hash-chain integrity of a transcript file without decrypting content.
ExportVerify {
/// Path to the .qpqt transcript file to verify.
/// Path to the .qpct transcript file to verify.
#[arg(long)]
input: PathBuf,
},
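The tamper-evident transcript described above links every record with a SHA-256 hash chain. As a minimal standalone sketch of the chaining idea (std's `DefaultHasher` stands in for SHA-256 here, and the record type is simplified to `&str` — this is not the real `.qpct` format):

```rust
// Toy hash chain: each record's hash covers the previous chain head,
// so editing, dropping, or reordering any record changes the final head.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn chain_hash(prev: u64, record: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    record.hash(&mut h);
    h.finish()
}

fn main() {
    let records = ["msg 1", "msg 2", "msg 3"];
    let head: u64 = records.iter().fold(0, |h, r| chain_hash(h, r));

    // A verifier recomputes the chain from the same records.
    let recomputed: u64 = records.iter().fold(0, |h, r| chain_hash(h, r));
    assert_eq!(head, recomputed);

    // Tampering with any record breaks the chain head.
    let tampered = ["msg 1", "MSG 2", "msg 3"];
    let bad: u64 = tampered.iter().fold(0, |h, r| chain_hash(h, r));
    assert_ne!(head, bad);

    println!("chain verified");
}
```

This is why `export verify` can check integrity without decrypting: the chain is computed over the stored (encrypted) records, so verification needs no password.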
@@ -418,7 +565,7 @@ enum Command {
playbook: PathBuf,
/// State file path (identity + MLS state).
#[arg(long, default_value = "qpq-state.bin", env = "QPQ_STATE")]
#[arg(long, default_value = "qpc-state.bin", env = "QPQ_STATE")]
state: PathBuf,
/// Server address (host:port).
@@ -441,14 +588,14 @@ enum Command {
// ── Helpers ───────────────────────────────────────────────────────────────────
#[cfg(not(feature = "v2"))]
/// Returns `qpq-{username}.bin` when `state` is still at the default
/// (`qpq-state.bin`) and a username has been provided. Otherwise returns
/// `state` unchanged. This lets `qpq --username alice` automatically isolate
/// Returns `qpc-{username}.bin` when `state` is still at the default
/// (`qpc-state.bin`) and a username has been provided. Otherwise returns
/// `state` unchanged. This lets `qpc --username alice` automatically isolate
/// Alice's state without requiring a manual `--state` flag.
fn derive_state_path(state: PathBuf, username: Option<&str>) -> PathBuf {
if state == Path::new("qpq-state.bin") {
if state == Path::new("qpc-state.bin") {
if let Some(uname) = username {
return PathBuf::from(format!("qpq-{uname}.bin"));
return PathBuf::from(format!("qpc-{uname}.bin"));
}
}
state
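The derivation rule documented above is small enough to sketch standalone: the per-user filename only kicks in when the user left `--state` at its default.

```rust
// Standalone sketch of the state-path rule: swap the default state file
// for a per-user one, but never override an explicitly chosen path.
use std::path::{Path, PathBuf};

fn derive_state_path(state: PathBuf, username: Option<&str>) -> PathBuf {
    if state == Path::new("qpc-state.bin") {
        if let Some(uname) = username {
            return PathBuf::from(format!("qpc-{uname}.bin"));
        }
    }
    state
}

fn main() {
    // Default path + username → per-user state file.
    assert_eq!(
        derive_state_path(PathBuf::from("qpc-state.bin"), Some("alice")),
        PathBuf::from("qpc-alice.bin")
    );
    // An explicit path wins even when a username is given.
    assert_eq!(
        derive_state_path(PathBuf::from("custom.bin"), Some("alice")),
        PathBuf::from("custom.bin")
    );
    println!("ok");
}
```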
@@ -470,24 +617,24 @@ async fn run_playbook(
device_id: Option<&str>,
extra_vars: &[String],
) -> anyhow::Result<()> {
use quicproquo_client::PlaybookRunner;
use quicprochat_client::PlaybookRunner;
let insecure = std::env::var("QPQ_DANGER_ACCEPT_INVALID_CERTS").is_ok();
// Connect to server.
let client =
quicproquo_client::connect_node_opt(server, ca_cert, server_name, insecure)
quicprochat_client::connect_node_opt(server, ca_cert, server_name, insecure)
.await
.context("connect to server")?;
// Build session state.
let mut session = quicproquo_client::client::session::SessionState::load(state, state_pw)
let mut session = quicprochat_client::client::session::SessionState::load(state, state_pw)
.context("load session state")?;
// If username/password provided, do OPAQUE login.
if let (Some(uname), Some(pw)) = (username, password) {
if let Err(e) =
quicproquo_client::opaque_login(&client, uname, pw, &session.identity.public_key_bytes()).await
quicprochat_client::opaque_login(&client, uname, pw, &session.identity.public_key_bytes()).await
{
eprintln!("OPAQUE login failed: {e:#}");
}
@@ -540,6 +687,13 @@ async fn main() -> anyhow::Result<()> {
)
.init();
// Load config file and apply to env BEFORE clap parses (so config values
// act as defaults that env vars and CLI flags can override).
{
let cfg = client_config::load_client_config();
client_config::apply_config_to_env(&cfg);
}
let args = Args::parse();
if args.danger_accept_invalid_certs {


@@ -1,7 +1,7 @@
//! v2 CLI command implementations — thin wrappers over the SDK.
use quicproquo_sdk::client::QpqClient;
use quicproquo_sdk::error::SdkError;
use quicprochat_sdk::client::QpqClient;
use quicprochat_sdk::error::SdkError;
/// Register a new user account via OPAQUE.
pub async fn cmd_register_user(
@@ -61,7 +61,7 @@ pub async fn cmd_health(client: &mut QpqClient) -> Result<(), SdkError> {
/// Resolve a username to its identity key.
pub async fn cmd_resolve(client: &mut QpqClient, username: &str) -> Result<(), SdkError> {
let rpc = client.rpc()?;
match quicproquo_sdk::users::resolve_user(rpc, username).await? {
match quicprochat_sdk::users::resolve_user(rpc, username).await? {
Some(key) => {
println!("{username} -> {}", hex::encode(&key));
}
@@ -75,7 +75,7 @@ pub async fn cmd_resolve(client: &mut QpqClient, username: &str) -> Result<(), S
/// List registered devices.
pub async fn cmd_devices_list(client: &mut QpqClient) -> Result<(), SdkError> {
let rpc = client.rpc()?;
let devices = quicproquo_sdk::devices::list_devices(rpc).await?;
let devices = quicprochat_sdk::devices::list_devices(rpc).await?;
if devices.is_empty() {
println!("no devices registered");
} else {
@@ -101,7 +101,7 @@ pub async fn cmd_devices_register(
let rpc = client.rpc()?;
let id_bytes = hex::decode(device_id)
.map_err(|e| SdkError::Other(anyhow::anyhow!("invalid device_id hex: {e}")))?;
let was_new = quicproquo_sdk::devices::register_device(rpc, &id_bytes, device_name).await?;
let was_new = quicprochat_sdk::devices::register_device(rpc, &id_bytes, device_name).await?;
if was_new {
println!("device registered: {device_name}");
} else {
@@ -118,7 +118,7 @@ pub async fn cmd_devices_revoke(
let rpc = client.rpc()?;
let id_bytes = hex::decode(device_id)
.map_err(|e| SdkError::Other(anyhow::anyhow!("invalid device_id hex: {e}")))?;
let revoked = quicproquo_sdk::devices::revoke_device(rpc, &id_bytes).await?;
let revoked = quicprochat_sdk::devices::revoke_device(rpc, &id_bytes).await?;
if revoked {
println!("device revoked: {device_id}");
} else {
@@ -131,12 +131,12 @@ pub async fn cmd_devices_revoke(
pub async fn cmd_recovery_setup(client: &mut QpqClient) -> Result<(), SdkError> {
// Load identity seed from state file.
let state_path = client.config_state_path();
let stored = quicproquo_sdk::state::load_state(&state_path, None)
let stored = quicprochat_sdk::state::load_state(&state_path, None)
.map_err(|e| SdkError::Crypto(format!("load identity for recovery: {e}")))?;
let rpc = client.rpc()?;
let codes =
quicproquo_sdk::recovery::setup_recovery(rpc, &stored.identity_seed, &[]).await?;
quicprochat_sdk::recovery::setup_recovery(rpc, &stored.identity_seed, &[]).await?;
println!("=== RECOVERY CODES ===");
println!("Save these codes securely. They will NOT be shown again.");
@@ -155,7 +155,7 @@ pub async fn cmd_recovery_setup(client: &mut QpqClient) -> Result<(), SdkError>
/// List pending outbox entries.
pub fn cmd_outbox_list(client: &QpqClient) -> Result<(), SdkError> {
let store = client.conversations()?;
let entries = quicproquo_sdk::outbox::list_pending(store)?;
let entries = quicprochat_sdk::outbox::list_pending(store)?;
if entries.is_empty() {
println!("outbox is empty — no pending messages");
} else {
@@ -178,7 +178,7 @@ pub fn cmd_outbox_list(client: &QpqClient) -> Result<(), SdkError> {
pub async fn cmd_outbox_retry(client: &mut QpqClient) -> Result<(), SdkError> {
let rpc = client.rpc()?;
let store = client.conversations()?;
let (sent, failed) = quicproquo_sdk::outbox::flush_outbox(rpc, store).await?;
let (sent, failed) = quicprochat_sdk::outbox::flush_outbox(rpc, store).await?;
println!("outbox flush: {sent} sent, {failed} permanently failed");
Ok(())
}
@@ -186,7 +186,7 @@ pub async fn cmd_outbox_retry(client: &mut QpqClient) -> Result<(), SdkError> {
/// Clear permanently failed outbox entries.
pub fn cmd_outbox_clear(client: &QpqClient) -> Result<(), SdkError> {
let store = client.conversations()?;
let cleared = quicproquo_sdk::outbox::clear_failed(store)?;
let cleared = quicprochat_sdk::outbox::clear_failed(store)?;
println!("cleared {cleared} failed outbox entries");
Ok(())
}
@@ -198,10 +198,10 @@ pub async fn cmd_recovery_restore(
) -> Result<(), SdkError> {
let rpc = client.rpc()?;
let (identity_seed, conversation_ids) =
quicproquo_sdk::recovery::recover_account(rpc, code).await?;
quicprochat_sdk::recovery::recover_account(rpc, code).await?;
// Restore identity.
let keypair = quicproquo_core::IdentityKeypair::from_seed(identity_seed);
let keypair = quicprochat_core::IdentityKeypair::from_seed(identity_seed);
client.set_identity_key(keypair.public_key_bytes().to_vec());
println!("account recovered successfully");
@@ -214,14 +214,14 @@ pub async fn cmd_recovery_restore(
}
// Save recovered state.
let state = quicproquo_sdk::state::StoredState {
let state = quicprochat_sdk::state::StoredState {
identity_seed,
group: None,
hybrid_key: None,
member_keys: Vec::new(),
};
let state_path = client.config_state_path();
quicproquo_sdk::state::save_state(&state_path, &state, None)?;
quicprochat_sdk::state::save_state(&state_path, &state, None)?;
println!("state saved to {}", state_path.display());
Ok(())


@@ -1,4 +1,4 @@
//! v2 CLI entry point — thin shell over `quicproquo_sdk::QpqClient`.
//! v2 CLI entry point — thin shell over `quicprochat_sdk::QpqClient`.
//!
//! Activated via `--features v2`. Replaces the v1 Cap'n Proto RPC main
//! with a simplified command surface backed by the SDK.
@@ -10,15 +10,15 @@ use std::time::Duration;
use anyhow::Context;
use clap::{Parser, Subcommand};
use quicproquo_sdk::client::QpqClient;
use quicproquo_sdk::config::ClientConfig;
use quicprochat_sdk::client::QpqClient;
use quicprochat_sdk::config::ClientConfig;
use crate::v2_commands;
// ── CLI ───────────────────────────────────────────────────────────────────────
#[derive(Debug, Parser)]
#[command(name = "qpq", about = "quicproquo CLI client (v2)", version)]
#[command(name = "qpc", about = "quicprochat CLI client (v2)", version)]
struct Args {
/// Server address (host:port).
#[arg(long, global = true, default_value = "127.0.0.1:7000", env = "QPQ_SERVER")]
@@ -37,7 +37,7 @@ struct Args {
db_password: Option<String>,
/// Path to the client state file (identity key, MLS state).
#[arg(long, global = true, default_value = "qpq-state.bin", env = "QPQ_STATE")]
#[arg(long, global = true, default_value = "qpc-state.bin", env = "QPQ_STATE")]
state: PathBuf,
/// DANGER: Skip TLS certificate verification. Development only.
@@ -48,7 +48,7 @@ struct Args {
)]
danger_accept_invalid_certs: bool,
/// Do not auto-start a local qpq-server.
/// Do not auto-start a local qpc-server.
#[arg(long, global = true, env = "QPQ_NO_SERVER")]
no_server: bool,
@@ -210,17 +210,17 @@ impl Drop for ServerGuard {
}
}
/// Find the `qpq-server` binary: same directory as current exe, then PATH.
/// Find the `qpc-server` binary: same directory as current exe, then PATH.
fn find_server_binary() -> Option<PathBuf> {
if let Ok(exe) = std::env::current_exe() {
let sibling = exe.with_file_name("qpq-server");
let sibling = exe.with_file_name("qpc-server");
if sibling.exists() {
return Some(sibling);
}
}
std::env::var_os("PATH").and_then(|paths| {
std::env::split_paths(&paths)
.map(|dir| dir.join("qpq-server"))
.map(|dir| dir.join("qpc-server"))
.find(|p| p.exists())
})
}
@@ -241,7 +241,7 @@ async fn probe_server(server_addr: &str) -> bool {
.is_ok()
}
/// Start a local qpq-server if one isn't already running.
/// Start a local qpc-server if one isn't already running.
/// Returns a guard that kills the child on drop (if we started one).
async fn ensure_server_running(
server_addr: &str,
@@ -258,8 +258,8 @@ async fn ensure_server_running(
let binary = find_server_binary().ok_or_else(|| {
anyhow::anyhow!(
"server at {server_addr} is not reachable and qpq-server binary not found; \
start a server manually or install qpq-server"
"server at {server_addr} is not reachable and qpc-server binary not found; \
start a server manually or install qpc-server"
)
})?;
@@ -300,7 +300,7 @@ async fn ensure_server_running(
if start.elapsed() > max_wait {
anyhow::bail!(
"auto-started qpq-server but it did not become ready within {max_wait:?}"
"auto-started qpc-server but it did not become ready within {max_wait:?}"
);
}
@@ -336,9 +336,9 @@ async fn connect_client(args: &Args) -> anyhow::Result<QpqClient> {
// Try loading identity from state file.
if args.state.exists() {
match quicproquo_sdk::state::load_state(&args.state, args.db_password.as_deref()) {
match quicprochat_sdk::state::load_state(&args.state, args.db_password.as_deref()) {
Ok(stored) => {
let keypair = quicproquo_core::IdentityKeypair::from_seed(stored.identity_seed);
let keypair = quicprochat_core::IdentityKeypair::from_seed(stored.identity_seed);
client.set_identity_key(keypair.public_key_bytes().to_vec());
}
Err(e) => {
@@ -351,6 +351,25 @@ async fn connect_client(args: &Args) -> anyhow::Result<QpqClient> {
Ok(client)
}
/// Connect and return client + identity keypair (needed for MLS one-shot commands).
async fn connect_with_identity(
args: &Args,
) -> anyhow::Result<(QpqClient, std::sync::Arc<quicprochat_core::IdentityKeypair>)> {
let client = connect_client(args).await?;
let keypair = if args.state.exists() {
let stored =
quicprochat_sdk::state::load_state(&args.state, args.db_password.as_deref())
.context("load identity state — register or login first")?;
std::sync::Arc::new(quicprochat_core::IdentityKeypair::from_seed(
stored.identity_seed,
))
} else {
anyhow::bail!("no state file found at {} — register or login first", args.state.display());
};
Ok((client, keypair))
}
// ── Entry point ──────────────────────────────────────────────────────────────
pub fn main() {
@@ -414,13 +433,13 @@ async fn run(args: Args) -> anyhow::Result<()> {
let config = build_config(&args)?;
let mut client = QpqClient::new(config);
if args.state.exists() {
match quicproquo_sdk::state::load_state(
match quicprochat_sdk::state::load_state(
&args.state,
args.db_password.as_deref(),
) {
Ok(stored) => {
let keypair =
quicproquo_core::IdentityKeypair::from_seed(stored.identity_seed);
quicprochat_core::IdentityKeypair::from_seed(stored.identity_seed);
client.set_identity_key(keypair.public_key_bytes().to_vec());
}
Err(e) => {
@@ -446,34 +465,89 @@ async fn run(args: Args) -> anyhow::Result<()> {
}
Cmd::Dm { ref username } => {
let mut client = connect_client(&args).await?;
v2_commands::cmd_resolve(&mut client, username)
.await
.context("dm setup failed")?;
// For now, print the resolved key. Full DM creation requires
// MLS group state, which will be handled in the REPL flow.
println!("(DM creation with full MLS setup is available in the REPL)");
let (client, identity) = connect_with_identity(&args).await?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let peer_key = quicprochat_sdk::users::resolve_user(rpc, username)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{username}' not found"))?;
let key_package = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("no KeyPackage available for peer"))?;
let mut member = quicprochat_core::GroupMember::new(identity.clone());
let (conv_id, was_new) = quicprochat_sdk::groups::create_dm(
rpc, conv_store, &mut member, &identity,
&peer_key, &key_package, None, None,
).await?;
if was_new {
println!("DM with {username} created (id: {})", hex::encode(conv_id.0));
} else {
println!("DM with {username} resumed (id: {})", hex::encode(conv_id.0));
}
}
Cmd::Send { ref to, ref msg } => {
let _ = (to, msg);
let _client = connect_client(&args).await?;
// Full send requires MLS group state restoration — deferred to REPL.
println!("(send is currently available in the REPL; one-shot send coming soon)");
let (client, identity) = connect_with_identity(&args).await?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(to);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation '{to}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let my_pub = identity.public_key_bytes();
let recipients: Vec<Vec<u8>> = conv
.member_keys
.iter()
.filter(|k| k.as_slice() != my_pub.as_slice())
.cloned()
.collect();
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let hybrid_keys = vec![None; recipients.len()];
quicprochat_sdk::messaging::send_message(
rpc, &mut member, &identity, msg, &recipients, &hybrid_keys, conv_id.0.as_slice(),
).await?;
quicprochat_sdk::groups::save_mls_state(conv_store, &conv_id, &member)?;
println!("sent to {to}");
}
Cmd::Recv { ref from } => {
let _ = from;
let _client = connect_client(&args).await?;
println!("(recv is currently available in the REPL; one-shot recv coming soon)");
let (client, identity) = connect_with_identity(&args).await?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(from);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("conversation '{from}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let my_key = identity.public_key_bytes();
let messages = quicprochat_sdk::messaging::receive_messages(
rpc, &mut member, my_key.as_slice(), None, conv_id.0.as_slice(), &[],
).await?;
quicprochat_sdk::groups::save_mls_state(conv_store, &conv_id, &member)?;
if messages.is_empty() {
println!("no new messages");
} else {
for msg in &messages {
let sender_short = hex::encode(&msg.sender_key[..4]);
let body = match &msg.message {
quicprochat_core::AppMessage::Chat { body, .. } => {
String::from_utf8_lossy(body).to_string()
}
other => format!("{other:?}"),
};
println!("[{sender_short}] {body}");
}
}
}
Cmd::Group {
action: GroupCmd::Create { ref name },
} => {
let _ = name;
let _client = connect_client(&args).await?;
println!("(group create is currently available in the REPL; one-shot coming soon)");
let (client, identity) = connect_with_identity(&args).await?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let mut member = quicprochat_core::GroupMember::new(identity.clone());
let conv_id = quicprochat_sdk::groups::create_group(conv_store, &mut member, name)?;
println!("group '{name}' created (id: {})", hex::encode(conv_id.0));
}
Cmd::Group {
@@ -483,9 +557,26 @@ async fn run(args: Args) -> anyhow::Result<()> {
ref user,
},
} => {
let _ = (group, user);
let _client = connect_client(&args).await?;
println!("(group invite is currently available in the REPL; one-shot coming soon)");
let (client, identity) = connect_with_identity(&args).await?;
let rpc = client.rpc().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_store = client.conversations().map_err(|e| anyhow::anyhow!("{e}"))?;
let conv_id = quicprochat_sdk::conversation::ConversationId::from_group_name(group);
let conv = conv_store
.load_conversation(&conv_id)?
.ok_or_else(|| anyhow::anyhow!("group '{group}' not found"))?;
let mut member = quicprochat_sdk::groups::restore_mls_state(&conv, &identity)?;
// Resolve peer identity key and fetch their KeyPackage.
let peer_key = quicprochat_sdk::users::resolve_user(rpc, user)
.await?
.ok_or_else(|| anyhow::anyhow!("user '{user}' not found"))?;
let key_package = quicprochat_sdk::keys::fetch_key_package(rpc, &peer_key)
.await?
.ok_or_else(|| anyhow::anyhow!("no KeyPackage available for peer"))?;
quicprochat_sdk::groups::invite_to_group(
rpc, conv_store, &mut member, &identity, &conv_id,
&peer_key, &key_package, None, None,
).await?;
println!("invited {user} to '{group}'");
}
Cmd::Devices {


@@ -1,4 +1,4 @@
// cargo_bin! only works for current package's binary; we spawn qpq-server from another package.
// cargo_bin! only works for current package's binary; we spawn qpc-server from another package.
#![allow(deprecated)]
#![allow(clippy::unwrap_used)]
#![allow(clippy::await_holding_lock)] // AUTH_LOCK intentionally held across await to serialize tests
@@ -18,12 +18,12 @@ fn ensure_rustls_provider() {
use sha2::{Sha256, Digest};
use quicproquo_client::{
use quicprochat_client::{
cmd_create_group, cmd_invite, cmd_join, cmd_login, cmd_ping, cmd_register_state,
cmd_register_user, cmd_send, connect_node, create_channel, enqueue, fetch_wait, init_auth,
opaque_login, receive_pending_plaintexts, resolve_user, ClientAuth,
};
use quicproquo_core::{GroupMember, HybridKeypair, IdentityKeypair, ReceivedMessage};
use quicprochat_core::{GroupMember, HybridKeypair, IdentityKeypair, ReceivedMessage};
/// Serialises ALL tests that call `init_auth` to prevent the global `AUTH_CONTEXT`
/// from being overwritten by concurrent tests. Every test that mutates auth state
@@ -71,7 +71,7 @@ fn spawn_server(base: &std::path::Path, extra_args: &[&str]) -> (String, PathBuf
let tls_key = base.join("server-key.der");
let data_dir = base.join("data");
let server_bin = cargo_bin("qpq-server");
let server_bin = cargo_bin("qpc-server");
let mut cmd = Command::new(server_bin);
cmd.arg("--listen")
.arg(&listen)
@@ -948,14 +948,14 @@ async fn e2e_dm_multi_message_epoch_synchronized() -> anyhow::Result<()> {
/// Helper: load a state file and reconstruct a GroupMember with its keystore.
fn load_member(state_path: &std::path::Path) -> (GroupMember, Option<HybridKeypair>) {
let bytes = std::fs::read(state_path).expect("read state");
let state: quicproquo_client::client::state::StoredState =
let state: quicprochat_client::client::state::StoredState =
bincode::deserialize(&bytes).expect("decode state");
state.into_parts(state_path).expect("into_parts")
}
/// Helper: save a GroupMember back to its state file.
fn save_member(state_path: &std::path::Path, member: &GroupMember, hybrid: Option<&HybridKeypair>) {
quicproquo_client::client::state::save_state(state_path, member, hybrid, None)
quicprochat_client::client::state::save_state(state_path, member, hybrid, None)
.expect("save state");
}
@@ -1394,7 +1394,7 @@ async fn e2e_file_upload_download() -> anyhow::Result<()> {
{
let mut p = req.get();
let mut auth = p.reborrow().init_auth();
quicproquo_client::client::rpc::set_auth(&mut auth)?;
quicprochat_client::client::rpc::set_auth(&mut auth)?;
p.set_blob_hash(&hash);
p.set_chunk(file_data);
p.set_offset(0);
@@ -1426,7 +1426,7 @@ async fn e2e_file_upload_download() -> anyhow::Result<()> {
{
let mut p = req.get();
let mut auth = p.reborrow().init_auth();
quicproquo_client::client::rpc::set_auth(&mut auth)?;
quicprochat_client::client::rpc::set_auth(&mut auth)?;
p.set_blob_id(&blob_id);
p.set_offset(0);
p.set_length(file_data.len() as u32);
@@ -1463,7 +1463,7 @@ async fn e2e_file_upload_download() -> anyhow::Result<()> {
{
let mut p = req.get();
let mut auth = p.reborrow().init_auth();
quicproquo_client::client::rpc::set_auth(&mut auth)?;
quicprochat_client::client::rpc::set_auth(&mut auth)?;
p.set_blob_id(&blob_id);
p.set_offset(100);
p.set_length(200);
@@ -1521,7 +1521,7 @@ async fn e2e_blob_hash_mismatch() -> anyhow::Result<()> {
{
let mut p = req.get();
let mut auth = p.reborrow().init_auth();
quicproquo_client::client::rpc::set_auth(&mut auth)?;
quicprochat_client::client::rpc::set_auth(&mut auth)?;
p.set_blob_hash(&wrong_hash);
p.set_chunk(&chunk_data[..]);
p.set_offset(0);
@@ -1560,7 +1560,7 @@ fn spawn_server_custom(base: &std::path::Path, args: &[&str]) -> (String, PathBu
let tls_key = base.join("server-key.der");
let data_dir = base.join("data");
let server_bin = cargo_bin("qpq-server");
let server_bin = cargo_bin("qpc-server");
let mut cmd = Command::new(server_bin);
cmd.arg("--listen")
.arg(&listen)
@@ -1888,7 +1888,7 @@ async fn e2e_keypackage_exhaustion_graceful() -> anyhow::Result<()> {
// Now try to fetch A's KeyPackage again — it should be exhausted.
let client = local.run_until(connect_node(&server, &ca_cert, "localhost")).await?;
let pkg = local
.run_until(quicproquo_client::client::rpc::fetch_key_package(&client, &a_pk))
.run_until(quicprochat_client::client::rpc::fetch_key_package(&client, &a_pk))
.await?;
// Graceful: either empty (no package available) or an error — but NOT a panic.

View File

@@ -1,9 +1,10 @@
[package]
name = "quicproquo-core"
name = "quicprochat-core"
version = "0.1.0"
edition = "2021"
description = "Crypto primitives, MLS state machine, and hybrid post-quantum KEM for quicproquo."
license = "MIT"
edition.workspace = true
description = "Crypto primitives, MLS state machine, and hybrid post-quantum KEM for quicprochat."
license = "Apache-2.0 OR MIT"
repository.workspace = true
[features]
default = ["native"]
@@ -14,11 +15,12 @@ native = [
"dep:openmls",
"dep:openmls_rust_crypto",
"dep:openmls_traits",
"dep:openmls_memory_storage",
"dep:tls_codec",
"dep:opaque-ke",
"dep:bincode",
"dep:capnp",
"dep:quicproquo-proto",
"dep:quicprochat-proto",
"dep:tokio",
]
@@ -48,12 +50,13 @@ opaque-ke = { workspace = true, optional = true }
openmls = { workspace = true, optional = true }
openmls_rust_crypto = { workspace = true, optional = true }
openmls_traits = { workspace = true, optional = true }
openmls_memory_storage = { workspace = true, optional = true }
tls_codec = { workspace = true, optional = true }
bincode = { workspace = true, optional = true }
# Serialisation (native only)
capnp = { workspace = true, optional = true }
quicproquo-proto = { path = "../quicproquo-proto", optional = true }
quicprochat-proto = { path = "../quicprochat-proto", optional = true }
# Async runtime (native only)
tokio = { workspace = true, optional = true }

View File

@@ -8,7 +8,7 @@
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
use quicproquo_core::{compute_safety_number, IdentityKeypair, padding};
use quicprochat_core::{compute_safety_number, IdentityKeypair, padding};
// ── Identity keypair benchmarks ──────────────────────────────────────────────
@@ -48,7 +48,7 @@ fn bench_identity_verify(c: &mut Criterion) {
// ── Sealed sender benchmarks ─────────────────────────────────────────────────
fn bench_sealed_sender(c: &mut Criterion) {
use quicproquo_core::sealed_sender::{seal, unseal};
use quicprochat_core::sealed_sender::{seal, unseal};
let sizes: &[(&str, usize)] = &[
("32B", 32),

View File

@@ -6,7 +6,7 @@
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
use quicproquo_core::{hybrid_encrypt, hybrid_decrypt, HybridKeypair};
use quicprochat_core::{hybrid_encrypt, hybrid_decrypt, HybridKeypair};
// ── Classical baseline (X25519 + ChaCha20-Poly1305) ─────────────────────────

View File

@@ -7,7 +7,7 @@
use std::sync::Arc;
use criterion::{criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion};
use quicproquo_core::{GroupMember, IdentityKeypair};
use quicprochat_core::{GroupMember, IdentityKeypair};
/// Create identities and a group of the given size.
/// Returns (creator, Vec<members>).

View File

@@ -11,17 +11,17 @@ use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criteri
fn capnp_serialize_envelope(seq: u64, data: &[u8]) -> Vec<u8> {
let mut msg = capnp::message::Builder::new_default();
{
let mut envelope = msg.init_root::<quicproquo_proto::node_capnp::envelope::Builder>();
let mut envelope = msg.init_root::<quicprochat_proto::node_capnp::envelope::Builder>();
envelope.set_seq(seq);
envelope.set_data(data);
}
quicproquo_proto::to_bytes(&msg).unwrap()
quicprochat_proto::to_bytes(&msg).unwrap()
}
fn capnp_deserialize_envelope(bytes: &[u8]) -> (u64, Vec<u8>) {
let reader = quicproquo_proto::from_bytes(bytes).unwrap();
let reader = quicprochat_proto::from_bytes(bytes).unwrap();
let envelope = reader
.get_root::<quicproquo_proto::node_capnp::envelope::Reader>()
.get_root::<quicprochat_proto::node_capnp::envelope::Reader>()
.unwrap();
(envelope.get_seq(), envelope.get_data().unwrap().to_vec())
}

View File

@@ -1,5 +1,5 @@
syntax = "proto3";
package quicproquo.bench;
package quicprochat.bench;
// Equivalent to the Envelope struct in delivery.capnp
message Envelope {

View File

@@ -1,4 +1,4 @@
//! Error types for `quicproquo-core`.
//! Error types for `quicprochat-core`.
use thiserror::Error;

View File

@@ -29,7 +29,7 @@
//! # Ratchet tree
//!
//! `use_ratchet_tree_extension = true` so that the ratchet tree is embedded
//! in Welcome messages. `new_from_welcome` is called with `ratchet_tree = None`;
//! in Welcome messages. `new_from_welcome` is called without a ratchet_tree;
//! openmls extracts the tree from the Welcome's `GroupInfo` extension.
use std::{path::Path, sync::Arc};
@@ -37,12 +37,13 @@ use std::{path::Path, sync::Arc};
use zeroize::Zeroizing;
use openmls::prelude::{
Ciphersuite, Credential, CredentialType, CredentialWithKey, CryptoConfig, GroupId, KeyPackage,
KeyPackageIn, MlsGroup, MlsGroupConfig, MlsMessageInBody, MlsMessageOut,
ProcessedMessageContent, ProtocolMessage, ProtocolVersion, TlsDeserializeTrait,
TlsSerializeTrait,
BasicCredential, Ciphersuite, Credential, CredentialWithKey, GroupId, KeyPackage,
KeyPackageIn, LeafNodeParameters, MlsGroup, MlsGroupCreateConfig, MlsGroupJoinConfig,
MlsMessageBodyIn, MlsMessageOut, ProcessedMessageContent, ProtocolMessage,
ProtocolVersion, StagedWelcome,
};
use openmls_traits::OpenMlsCryptoProvider;
use openmls_traits::OpenMlsProvider;
use tls_codec::{Deserialize as TlsDeserializeTrait, Serialize as TlsSerializeTrait};
use crate::{
error::CoreError,
@@ -102,8 +103,10 @@ pub struct GroupMember {
identity: Arc<IdentityKeypair>,
/// Active MLS group, if any.
group: Option<MlsGroup>,
/// Shared group configuration (wire format, ratchet tree extension, etc.).
config: MlsGroupConfig,
/// Shared group creation configuration (wire format, ratchet tree extension, etc.).
create_config: MlsGroupCreateConfig,
/// Shared group join configuration (wire format, ratchet tree extension, etc.).
join_config: MlsGroupJoinConfig,
/// Whether this member uses hybrid (X25519 + ML-KEM-768) HPKE keys.
hybrid: bool,
}
@@ -139,7 +142,11 @@ impl GroupMember {
group: Option<MlsGroup>,
hybrid: bool,
) -> Self {
let config = MlsGroupConfig::builder()
let create_config = MlsGroupCreateConfig::builder()
.use_ratchet_tree_extension(true)
.build();
let join_config = MlsGroupJoinConfig::builder()
.use_ratchet_tree_extension(true)
.build();
@@ -153,7 +160,8 @@ impl GroupMember {
backend,
identity,
group,
config,
create_config,
join_config,
hybrid,
}
}
@@ -175,18 +183,19 @@ impl GroupMember {
///
/// Returns [`CoreError::Mls`] if openmls fails to create the KeyPackage.
pub fn generate_key_package(&mut self) -> Result<Vec<u8>, CoreError> {
let credential_with_key = self.make_credential_with_key()?;
let credential_with_key = self.make_credential_with_key();
let key_package = KeyPackage::builder()
let key_package_bundle = KeyPackage::builder()
.build(
CryptoConfig::with_default_version(CIPHERSUITE),
CIPHERSUITE,
&self.backend,
self.identity.as_ref(),
credential_with_key,
)
.map_err(|e| CoreError::Mls(format!("{e:?}")))?;
key_package
key_package_bundle
.key_package()
.tls_serialize_detached()
.map_err(|e| CoreError::Mls(format!("{e:?}")))
}
@@ -205,13 +214,13 @@ impl GroupMember {
///
/// Returns [`CoreError::Mls`] if the group already exists or openmls fails.
pub fn create_group(&mut self, group_id: &[u8]) -> Result<(), CoreError> {
let credential_with_key = self.make_credential_with_key()?;
let credential_with_key = self.make_credential_with_key();
let mls_id = GroupId::from_slice(group_id);
let group = MlsGroup::new_with_group_id(
&self.backend,
self.identity.as_ref(),
&self.config,
&self.create_config,
mls_id,
credential_with_key,
)
@@ -303,7 +312,7 @@ impl GroupMember {
let leaf_index = group
.members()
.find(|m| m.credential.identity() == member_identity)
.find(|m| m.credential.serialized_content() == member_identity)
.map(|m| m.index)
.ok_or_else(|| CoreError::Mls("member not found in group".into()))?;
@@ -384,7 +393,11 @@ impl GroupMember {
.ok_or_else(|| CoreError::Mls("no active group".into()))?;
let (proposal_out, _ref) = group
.propose_self_update(&self.backend, self.identity.as_ref(), None)
.propose_self_update(
&self.backend,
self.identity.as_ref(),
LeafNodeParameters::default(),
)
.map_err(|e| CoreError::Mls(format!("propose_self_update: {e:?}")))?;
proposal_out
@@ -396,7 +409,7 @@ impl GroupMember {
pub fn has_pending_proposals(&self) -> bool {
self.group
.as_ref()
.map(|g| g.pending_proposals().next().is_some())
.map(|g| g.has_pending_proposals())
.unwrap_or(false)
}
@@ -417,17 +430,23 @@ impl GroupMember {
let msg_in = openmls::prelude::MlsMessageIn::tls_deserialize(&mut welcome_bytes)
.map_err(|e| CoreError::Mls(format!("Welcome deserialise: {e:?}")))?;
// into_welcome() is feature-gated in openmls 0.5; extract() is public.
let welcome = match msg_in.extract() {
MlsMessageInBody::Welcome(w) => w,
MlsMessageBodyIn::Welcome(w) => w,
_ => return Err(CoreError::Mls("expected a Welcome message".into())),
};
// ratchet_tree = None because use_ratchet_tree_extension = true embeds
// the tree inside the Welcome's GroupInfo extension.
let group = MlsGroup::new_from_welcome(&self.backend, &self.config, welcome, None)
let staged = StagedWelcome::new_from_welcome(
&self.backend,
&self.join_config,
welcome,
None, // ratchet tree extracted from the Welcome's GroupInfo extension
)
.map_err(|e| CoreError::Mls(format!("new_from_welcome: {e:?}")))?;
let group = staged
.into_group(&self.backend)
.map_err(|e| CoreError::Mls(format!("into_group: {e:?}")))?;
self.group = Some(group);
Ok(())
}
@@ -508,10 +527,9 @@ impl GroupMember {
let msg_in = openmls::prelude::MlsMessageIn::tls_deserialize(&mut bytes)
.map_err(|e| CoreError::Mls(format!("message deserialise: {e:?}")))?;
// into_protocol_message() is feature-gated; extract() + manual construction is not.
let protocol_message = match msg_in.extract() {
MlsMessageInBody::PrivateMessage(m) => ProtocolMessage::PrivateMessage(m),
MlsMessageInBody::PublicMessage(m) => ProtocolMessage::PublicMessage(m),
let protocol_message: ProtocolMessage = match msg_in.extract() {
MlsMessageBodyIn::PrivateMessage(m) => m.into(),
MlsMessageBodyIn::PublicMessage(m) => m.into(),
_ => return Err(CoreError::Mls("not a protocol message".into())),
};
@@ -519,7 +537,7 @@ impl GroupMember {
.process_message(&self.backend, protocol_message)
.map_err(|e| CoreError::Mls(format!("process_message: {e:?}")))?;
let sender_identity = processed.credential().identity().to_vec();
let sender_identity = processed.credential().serialized_content().to_vec();
match processed.into_content() {
ProcessedMessageContent::ApplicationMessage(app) => {
@@ -545,11 +563,15 @@ impl GroupMember {
}
// Proposals are stored for a later Commit; nothing to return yet.
ProcessedMessageContent::ProposalMessage(proposal) => {
group.store_pending_proposal(*proposal);
group
.store_pending_proposal(self.backend.storage(), *proposal)
.map_err(|e| CoreError::Mls(format!("store_pending_proposal: {e:?}")))?;
Ok((sender_identity, ReceivedMessage::StateChanged))
}
ProcessedMessageContent::ExternalJoinProposalMessage(proposal) => {
group.store_pending_proposal(*proposal);
group
.store_pending_proposal(self.backend.storage(), *proposal)
.map_err(|e| CoreError::Mls(format!("store_pending_proposal: {e:?}")))?;
Ok((sender_identity, ReceivedMessage::StateChanged))
}
}
@@ -597,6 +619,69 @@ impl GroupMember {
self.group.as_ref()
}
/// Serialize the MLS group state (via the backing `StorageProvider`).
///
/// In openmls 0.8 the `MlsGroup` is no longer `Serialize`; its state is
/// held inside the `StorageProvider`. This method serializes the full
/// provider storage to bytes, which can later be restored with
/// [`new_from_storage_bytes`].
///
/// Returns `None` if no active group exists.
///
/// [`new_from_storage_bytes`]: Self::new_from_storage_bytes
pub fn serialize_mls_state(&self) -> Result<Option<Vec<u8>>, CoreError> {
if self.group.is_none() {
return Ok(None);
}
let bytes = self
.backend
.storage()
.to_bytes()
.map_err(|e| CoreError::Mls(format!("serialize storage: {e}")))?;
Ok(Some(bytes))
}
/// Create a `GroupMember` from previously serialized storage bytes.
///
/// Reconstructs the `DiskKeyStore` from the blob, then loads the
/// `MlsGroup` from the storage provider using the given `group_id`.
pub fn new_from_storage_bytes(
identity: Arc<IdentityKeypair>,
storage_bytes: &[u8],
group_id: &[u8],
hybrid: bool,
) -> Result<Self, CoreError> {
let key_store = DiskKeyStore::from_bytes(storage_bytes)
.map_err(|e| CoreError::Mls(format!("deserialize storage: {e}")))?;
let create_config = MlsGroupCreateConfig::builder()
.use_ratchet_tree_extension(true)
.build();
let join_config = MlsGroupJoinConfig::builder()
.use_ratchet_tree_extension(true)
.build();
let backend = if hybrid {
HybridCryptoProvider::new_hybrid(key_store)
} else {
HybridCryptoProvider::new_classical(key_store)
};
let mls_group_id = GroupId::from_slice(group_id);
let group = MlsGroup::load(backend.storage(), &mls_group_id)
.map_err(|e| CoreError::Mls(format!("load group from storage: {e}")))?;
Ok(Self {
backend,
identity,
group,
create_config,
join_config,
hybrid,
})
}
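The persist/restore contract documented above (serialize the provider's key-value storage to an opaque blob, later rebuild it and reload the group) can be illustrated without openmls. This is a minimal stand-in sketch: the map models `MemoryStorage`'s byte-keyed values, and the length-prefixed encoding is a hypothetical substitute for the bincode blob, not the real wire format.

```rust
use std::collections::HashMap;

// Encode an opaque byte-keyed map to a single blob (stand-in for bincode).
fn storage_to_bytes(map: &HashMap<Vec<u8>, Vec<u8>>) -> Vec<u8> {
    let mut entries: Vec<_> = map.iter().collect();
    entries.sort(); // deterministic output for stable blobs
    let mut out = Vec::new();
    for (k, v) in entries {
        out.extend((k.len() as u32).to_le_bytes());
        out.extend(k);
        out.extend((v.len() as u32).to_le_bytes());
        out.extend(v);
    }
    out
}

// Rebuild the map from the blob, mirroring the restore half of the round trip.
fn storage_from_bytes(mut bytes: &[u8]) -> HashMap<Vec<u8>, Vec<u8>> {
    let mut map = HashMap::new();
    while bytes.len() >= 4 {
        let klen = u32::from_le_bytes(bytes[..4].try_into().unwrap()) as usize;
        let k = bytes[4..4 + klen].to_vec();
        bytes = &bytes[4 + klen..];
        let vlen = u32::from_le_bytes(bytes[..4].try_into().unwrap()) as usize;
        let v = bytes[4..4 + vlen].to_vec();
        bytes = &bytes[4 + vlen..];
        map.insert(k, v);
    }
    map
}

fn main() {
    let mut store = HashMap::new();
    store.insert(b"group/size-test".to_vec(), b"epoch-2-state".to_vec());
    let blob = storage_to_bytes(&store);
    let restored = storage_from_bytes(&blob);
    assert_eq!(store, restored);
    println!("round-trip ok: {} entries", restored.len());
}
```

The point of the pattern is that the blob, not the `MlsGroup` value, is the unit of persistence; the group is always reconstructed from storage by id.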
/// Return the identity (credential) bytes of all current group members.
///
/// Each entry is the raw credential payload (Ed25519 public key bytes)
@@ -608,23 +693,20 @@ impl GroupMember {
};
group
.members()
.map(|m| m.credential.identity().to_vec())
.map(|m| m.credential.serialized_content().to_vec())
.collect()
}
// ── Private helpers ───────────────────────────────────────────────────────
fn make_credential_with_key(&self) -> Result<CredentialWithKey, CoreError> {
let credential = Credential::new(
self.identity.public_key_bytes().to_vec(),
CredentialType::Basic,
)
.map_err(|e| CoreError::Mls(format!("{e:?}")))?;
fn make_credential_with_key(&self) -> CredentialWithKey {
let credential: Credential =
BasicCredential::new(self.identity.public_key_bytes().to_vec()).into();
Ok(CredentialWithKey {
CredentialWithKey {
credential,
signature_key: self.identity.public_key_bytes().to_vec().into(),
})
}
}
}
@@ -758,11 +840,6 @@ mod tests {
let (_commit_a, welcome_a) = creator.add_member(&a_kp).expect("add A");
a.join_group(&welcome_a).expect("A join");
// A must process the commit that added them (it's a StateChanged for A since
// the commit itself is what brought them in — but actually A joined via Welcome,
// so A doesn't process the add-commit). The creator already merged the pending
// commit in add_member, so creator is at epoch 2.
// Add B — at this point creator is at epoch 2 (after adding A).
let (commit_b, welcome_b) = creator.add_member(&b_kp).expect("add B");
b.join_group(&welcome_b).expect("B join");
@@ -958,7 +1035,7 @@ mod tests {
);
}
/// 10 messages alternating Alice→Bob and Bob→Alice all decrypt successfully.
/// 10 messages alternating Alice->Bob and Bob->Alice all decrypt successfully.
/// Verifies that epoch state stays in sync across multiple application messages.
#[test]
fn multi_message_roundtrip_epoch_stays_in_sync() {
@@ -1002,4 +1079,96 @@ mod tests {
"send_message before join must return an error"
);
}
/// Measure actual MLS artifact sizes for mesh planning.
/// These numbers inform the MLS-Lite design and constrained link feasibility.
#[test]
fn measure_mls_wire_sizes() {
let creator_id = Arc::new(IdentityKeypair::generate());
let joiner_id = Arc::new(IdentityKeypair::generate());
let mut creator = GroupMember::new(Arc::clone(&creator_id));
let mut joiner = GroupMember::new(Arc::clone(&joiner_id));
// 1. KeyPackage size
let kp_bytes = joiner.generate_key_package().expect("generate KP");
println!("=== MLS Wire Format Sizes ===");
println!("KeyPackage: {} bytes", kp_bytes.len());
// 2. Create group (no wire message, just local state)
creator.create_group(b"size-test").expect("create group");
// 3. Add member -> Commit + Welcome
let (commit_bytes, welcome_bytes) = creator.add_member(&kp_bytes).expect("add member");
println!("Commit (add): {} bytes", commit_bytes.len());
println!("Welcome: {} bytes", welcome_bytes.len());
// Join the group
joiner.join_group(&welcome_bytes).expect("join");
// 4. Application message (short payload)
let short_msg = creator.send_message(b"hello").expect("short msg");
println!("AppMessage (5B): {} bytes", short_msg.len());
// 5. Application message (medium payload ~100 bytes)
let medium_payload = vec![0x42u8; 100];
let medium_msg = creator.send_message(&medium_payload).expect("medium msg");
println!("AppMessage (100B): {} bytes", medium_msg.len());
// 6. Self-update proposal
let update_proposal = creator.propose_self_update().expect("update proposal");
println!("UpdateProposal: {} bytes", update_proposal.len());
// Joiner processes the proposal
joiner.receive_message(&update_proposal).expect("recv proposal");
// 7. Commit (update only, no welcome)
let (update_commit, _) = joiner.commit_pending_proposals().expect("commit update");
println!("Commit (update): {} bytes", update_commit.len());
// Summary for LoRa feasibility
println!("\n=== LoRa Feasibility (SF12/BW125, MTU=51 bytes) ===");
println!("KeyPackage: {} fragments ({:.0}s at 1% duty)",
(kp_bytes.len() + 50) / 51,
(kp_bytes.len() as f64 / 51.0).ceil() * 36.0 / 60.0);
println!("Welcome: {} fragments ({:.0}s at 1% duty)",
(welcome_bytes.len() + 50) / 51,
(welcome_bytes.len() as f64 / 51.0).ceil() * 36.0 / 60.0);
println!("AppMessage (5B): {} fragments",
(short_msg.len() + 50) / 51);
// Assertions to catch regressions / validate estimates
assert!(kp_bytes.len() < 1000, "KeyPackage should be under 1KB");
assert!(welcome_bytes.len() < 3000, "Welcome should be under 3KB");
assert!(short_msg.len() < 300, "Short AppMessage should be under 300B");
}
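The fragment counts in the LoRa summary above come from a plain ceiling division over the 51-byte MTU, equivalent to the `(len + 50) / 51` expressions in the printlns. A standalone check of that arithmetic (the payload sizes below are illustrative placeholders, not measured artifact sizes):

```rust
// Ceiling division: how many MTU-sized LoRa fragments a payload needs.
// Equivalent to the `(len + 50) / 51` form used in the test output above.
fn fragments(payload_len: usize, mtu: usize) -> usize {
    (payload_len + mtu - 1) / mtu
}

fn main() {
    let mtu = 51;
    assert_eq!(fragments(5, mtu), 1);    // short app-message payload
    assert_eq!(fragments(51, mtu), 1);   // exactly one fragment
    assert_eq!(fragments(52, mtu), 2);   // one byte over the MTU
    assert_eq!(fragments(900, mtu), 18); // a sub-1KB KeyPackage, hypothetically
    println!("900B -> {} fragments", fragments(900, mtu));
}
```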
/// Measure MLS sizes with hybrid (post-quantum) mode enabled.
#[test]
fn measure_mls_wire_sizes_hybrid() {
let creator_id = Arc::new(IdentityKeypair::generate());
let joiner_id = Arc::new(IdentityKeypair::generate());
let mut creator = GroupMember::new_hybrid(Arc::clone(&creator_id));
let mut joiner = GroupMember::new_hybrid(Arc::clone(&joiner_id));
// KeyPackage with hybrid (X25519 + ML-KEM-768) init key
let kp_bytes = joiner.generate_key_package().expect("generate hybrid KP");
println!("=== MLS Wire Format Sizes (Hybrid PQ Mode) ===");
println!("KeyPackage (PQ): {} bytes", kp_bytes.len());
creator.create_group(b"hybrid-size-test").expect("create group");
let (commit_bytes, welcome_bytes) = creator.add_member(&kp_bytes).expect("add member");
println!("Commit (add, PQ): {} bytes", commit_bytes.len());
println!("Welcome (PQ): {} bytes", welcome_bytes.len());
joiner.join_group(&welcome_bytes).expect("join");
let short_msg = creator.send_message(b"hello").expect("short msg");
println!("AppMessage (PQ): {} bytes", short_msg.len());
// PQ KeyPackages are larger due to ML-KEM-768 public key (1184 bytes)
assert!(kp_bytes.len() > 1000, "Hybrid KeyPackage should be >1KB due to ML-KEM");
assert!(kp_bytes.len() < 3000, "Hybrid KeyPackage should be <3KB");
}
}

View File

@@ -27,8 +27,9 @@ use openmls_traits::{
crypto::OpenMlsCrypto,
types::{
CryptoError, ExporterSecret, HpkeCiphertext, HpkeConfig, HpkeKeyPair, HpkeKemType,
KemOutput,
},
OpenMlsCryptoProvider,
OpenMlsProvider,
};
use tls_codec::SecretVLBytes;
@@ -128,6 +129,15 @@ impl OpenMlsCrypto for HybridCrypto {
self.rust_crypto.hkdf_extract(hash_type, salt, ikm)
}
fn hmac(
&self,
hash_type: HashType,
key: &[u8],
message: &[u8],
) -> Result<SecretVLBytes, CryptoError> {
self.rust_crypto.hmac(hash_type, key, message)
}
fn hkdf_expand(
&self,
hash_type: HashType,
@@ -189,25 +199,18 @@ impl OpenMlsCrypto for HybridCrypto {
info: &[u8],
aad: &[u8],
ptxt: &[u8],
) -> HpkeCiphertext {
) -> Result<HpkeCiphertext, CryptoError> {
if Self::is_hybrid_public_key(pk_r) {
// The trait `OpenMlsCrypto::hpke_seal` returns `HpkeCiphertext` (not
// `Result`), so we cannot propagate errors through the return type.
// Returning an empty ciphertext would silently cause data loss.
// Instead, panic on failure — a hybrid key that passes the length
// check but fails deserialization or encryption indicates a critical
// bug (corrupted key material), not a recoverable condition.
let recipient_pk = HybridPublicKey::from_bytes(pk_r)
.expect("hybrid public key deserialization failed — key material is corrupted");
// Pass HPKE info and aad through for proper context binding (RFC 9180).
.map_err(|_| CryptoError::CryptoLibraryError)?;
let envelope = hybrid_encrypt(&recipient_pk, ptxt, info, aad)
.expect("hybrid HPKE encryption failed — critical crypto error");
.map_err(|_| CryptoError::CryptoLibraryError)?;
let kem_output = envelope[..HYBRID_KEM_OUTPUT_LEN].to_vec();
let ciphertext = envelope[HYBRID_KEM_OUTPUT_LEN..].to_vec();
HpkeCiphertext {
Ok(HpkeCiphertext {
kem_output: kem_output.into(),
ciphertext: ciphertext.into(),
}
})
} else {
self.rust_crypto.hpke_seal(config, pk_r, info, aad, ptxt)
}
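The hybrid branch of `hpke_seal` above concatenates the KEM output and the ciphertext into one envelope and splits it at a fixed prefix length. That splitting step in isolation looks like the sketch below; the 4-byte length is a placeholder, not the real `HYBRID_KEM_OUTPUT_LEN` (which covers the combined X25519 + ML-KEM-768 encapsulation).

```rust
// Split a concatenated envelope into (kem_output, ciphertext) at a fixed
// KEM-output prefix length, as the hybrid hpke_seal path does.
fn split_envelope(envelope: &[u8], kem_output_len: usize) -> Option<(&[u8], &[u8])> {
    if envelope.len() < kem_output_len {
        return None; // truncated/corrupted envelope: refuse rather than panic
    }
    Some(envelope.split_at(kem_output_len))
}

fn main() {
    let kem_len = 4; // placeholder length for illustration only
    let envelope = [1u8, 2, 3, 4, 9, 9];
    let (kem, ct) = split_envelope(&envelope, kem_len).unwrap();
    assert_eq!(kem, &[1, 2, 3, 4]);
    assert_eq!(ct, &[9, 9]);
    assert!(split_envelope(&[1, 2], kem_len).is_none());
    println!("kem={} bytes, ct={} bytes", kem.len(), ct.len());
}
```

Returning `Option`/`Result` here matches the diff's direction: the new trait signature lets corrupted key material surface as `CryptoError` instead of a panic.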
@@ -245,7 +248,7 @@ impl OpenMlsCrypto for HybridCrypto {
info: &[u8],
exporter_context: &[u8],
exporter_length: usize,
) -> Result<(Vec<u8>, ExporterSecret), CryptoError> {
) -> Result<(KemOutput, ExporterSecret), CryptoError> {
if Self::is_hybrid_public_key(pk_r) {
// A key that passes the hybrid length check but fails deserialization
// is corrupted — return an error instead of silently downgrading to
@@ -286,14 +289,14 @@ impl OpenMlsCrypto for HybridCrypto {
}
}
fn derive_hpke_keypair(&self, config: HpkeConfig, ikm: &[u8]) -> HpkeKeyPair {
fn derive_hpke_keypair(&self, config: HpkeConfig, ikm: &[u8]) -> Result<HpkeKeyPair, CryptoError> {
if self.hybrid_enabled && config.0 == HpkeKemType::DhKem25519 {
let kp = HybridKeypair::derive_from_ikm(ikm);
let private_bytes = kp.private_to_bytes();
HpkeKeyPair {
Ok(HpkeKeyPair {
private: private_bytes.as_slice().into(),
public: kp.public_key().to_bytes(),
}
})
} else {
self.rust_crypto.derive_hpke_keypair(config, ikm)
}
@@ -343,10 +346,10 @@ impl Default for HybridCryptoProvider {
}
}
impl OpenMlsCryptoProvider for HybridCryptoProvider {
impl OpenMlsProvider for HybridCryptoProvider {
type CryptoProvider = HybridCrypto;
type RandProvider = RustCrypto;
type KeyStoreProvider = DiskKeyStore;
type StorageProvider = DiskKeyStore;
fn crypto(&self) -> &Self::CryptoProvider {
&self.crypto
@@ -356,7 +359,7 @@ impl OpenMlsCryptoProvider for HybridCryptoProvider {
self.crypto.rust_crypto()
}
fn key_store(&self) -> &Self::KeyStoreProvider {
fn storage(&self) -> &Self::StorageProvider {
&self.key_store
}
}
@@ -383,7 +386,7 @@ mod tests {
let crypto = HybridCrypto::new();
let ikm = b"test-ikm-for-hybrid-hpke-keypair";
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm);
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm).unwrap();
assert_eq!(keypair.public.len(), HYBRID_PUBLIC_KEY_LEN);
assert_eq!(keypair.private.as_ref().len(), HYBRID_PRIVATE_KEY_LEN);
@@ -397,7 +400,7 @@ mod tests {
info,
aad,
plaintext,
);
).unwrap();
assert!(!ct.kem_output.as_slice().is_empty());
assert!(!ct.ciphertext.as_slice().is_empty());
@@ -419,7 +422,7 @@ mod tests {
let crypto = HybridCrypto::new();
let ikm = b"exporter-ikm";
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm);
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm).unwrap();
let info = b"";
let exporter_context = b"MLS 1.0 external init";
let exporter_length = 32;
@@ -457,7 +460,7 @@ mod tests {
let crypto = HybridCrypto::new_classical();
let ikm = b"test-ikm-for-classical-hpke";
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm);
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm).unwrap();
// Classical X25519 keys are 32 bytes
assert_eq!(keypair.public.len(), 32);
assert_eq!(keypair.private.as_ref().len(), 32);
@@ -469,7 +472,7 @@ mod tests {
let crypto = HybridCrypto::new_classical();
let ikm = b"test-ikm-for-classical-round-trip";
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm);
let keypair = crypto.derive_hpke_keypair(hpke_config_dhkem_x25519(), ikm).unwrap();
assert_eq!(keypair.public.len(), 32); // classical key
let plaintext = b"hello classical MLS";
@@ -482,7 +485,7 @@ mod tests {
info,
aad,
plaintext,
);
).unwrap();
assert!(!ct.kem_output.as_slice().is_empty());
let decrypted = crypto
@@ -501,7 +504,7 @@ mod tests {
#[test]
fn key_package_generation_with_hybrid_provider() {
use openmls::prelude::{
Credential, CredentialType, CredentialWithKey, CryptoConfig, KeyPackage,
BasicCredential, CredentialWithKey, KeyPackage,
};
use std::sync::Arc;
use tls_codec::Serialize;
@@ -514,26 +517,24 @@ mod tests {
let provider = HybridCryptoProvider::default();
let identity = Arc::new(IdentityKeypair::generate());
let credential = Credential::new(
identity.public_key_bytes().to_vec(),
CredentialType::Basic,
)
.unwrap();
let credential: openmls::prelude::Credential =
BasicCredential::new(identity.public_key_bytes().to_vec()).into();
let credential_with_key = CredentialWithKey {
credential,
signature_key: identity.public_key_bytes().to_vec().into(),
};
let key_package = KeyPackage::builder()
let key_package_bundle = KeyPackage::builder()
.build(
CryptoConfig::with_default_version(CIPHERSUITE),
CIPHERSUITE,
&provider,
identity.as_ref(),
credential_with_key,
)
.expect("KeyPackage with hybrid HPKE");
let bytes = key_package
let bytes = key_package_bundle
.key_package()
.tls_serialize_detached()
.expect("serialize KeyPackage");
assert!(!bytes.is_empty());

View File

@@ -90,7 +90,7 @@ impl IdentityKeypair {
/// `openmls_basic_credential` crate.
#[cfg(feature = "native")]
impl openmls_traits::signatures::Signer for IdentityKeypair {
fn sign(&self, payload: &[u8]) -> Result<Vec<u8>, openmls_traits::types::Error> {
fn sign(&self, payload: &[u8]) -> Result<Vec<u8>, openmls_traits::signatures::SignerError> {
let sk = self.signing_key();
let sig: ed25519_dalek::Signature = sk.sign(payload);
Ok(sig.to_bytes().to_vec())

View File

@@ -14,18 +14,18 @@
//! # Wire format
//!
//! KeyPackages are TLS-encoded using `tls_codec` (same version as openmls).
//! The resulting bytes are opaque to the quicproquo transport layer.
//! The resulting bytes are opaque to the quicprochat transport layer.
use openmls::prelude::{
Ciphersuite, Credential, CredentialType, CredentialWithKey, CryptoConfig, KeyPackage,
KeyPackageIn, TlsDeserializeTrait, TlsSerializeTrait,
BasicCredential, Ciphersuite, CredentialWithKey, KeyPackage, KeyPackageIn,
};
use openmls_rust_crypto::OpenMlsRustCrypto;
use tls_codec::{Deserialize as TlsDeserializeTrait, Serialize as TlsSerializeTrait};
use sha2::{Digest, Sha256};
use crate::{error::CoreError, identity::IdentityKeypair};
/// The MLS ciphersuite used throughout quicproquo (RFC 9420 §17.1).
/// The MLS ciphersuite used throughout quicprochat (RFC 9420 §17.1).
pub const ALLOWED_CIPHERSUITE: Ciphersuite =
Ciphersuite::MLS_128_DHKEMX25519_AES128GCM_SHA256_Ed25519;
@@ -74,8 +74,8 @@ pub fn generate_key_package(identity: &IdentityKeypair) -> Result<(Vec<u8>, Vec<
// Build a BasicCredential using the raw Ed25519 public key bytes as the
// MLS identity. Per RFC 9420, any byte string may serve as the identity.
let credential = Credential::new(identity.public_key_bytes().to_vec(), CredentialType::Basic)
.map_err(|e| CoreError::Mls(format!("{e:?}")))?;
let credential: openmls::prelude::Credential =
BasicCredential::new(identity.public_key_bytes().to_vec()).into();
// The `signature_key` in CredentialWithKey is the Ed25519 public key that
// will be used to verify the KeyPackage's leaf node signature.
@@ -87,19 +87,13 @@ pub fn generate_key_package(identity: &IdentityKeypair) -> Result<(Vec<u8>, Vec<
// `IdentityKeypair` implements `openmls_traits::signatures::Signer`
// so it can be passed directly to the builder.
let key_package = KeyPackage::builder()
.build(
CryptoConfig::with_default_version(CIPHERSUITE),
&backend,
identity,
credential_with_key,
)
let key_package_bundle = KeyPackage::builder()
.build(CIPHERSUITE, &backend, identity, credential_with_key)
.map_err(|e| CoreError::Mls(format!("{e:?}")))?;
// TLS-encode the KeyPackage using the trait from the openmls prelude.
// This uses tls_codec 0.3 (the same version openmls uses internally),
// avoiding a duplicate-trait conflict with tls_codec 0.4.
let tls_bytes = key_package
// TLS-encode the KeyPackage.
let tls_bytes = key_package_bundle
.key_package()
.tls_serialize_detached()
.map_err(|e| CoreError::Mls(format!("{e:?}")))?;

View File

@@ -0,0 +1,713 @@
use std::{
fs,
path::{Path, PathBuf},
};
use openmls_memory_storage::MemoryStorage;
use openmls_traits::storage::{traits, StorageProvider, CURRENT_VERSION};
/// A disk-backed storage provider implementing `StorageProvider`.
///
/// Wraps `openmls_memory_storage::MemoryStorage` and flushes to disk on every
/// write so that HPKE init keys and group state survive process restarts.
///
/// # Serialization
///
/// Uses bincode for the outer `HashMap<Vec<u8>, Vec<u8>>` container when
/// persisting to disk. The inner values use serde_json (matching
/// `MemoryStorage`'s serialization format).
///
/// # Persistence security
///
/// When `path` is set, file permissions are restricted to owner-only (0o600)
/// on Unix platforms, since the store may contain HPKE private keys.
#[derive(Debug)]
pub struct DiskKeyStore {
path: Option<PathBuf>,
storage: MemoryStorage,
}
#[derive(thiserror::Error, Debug)]
pub enum DiskKeyStoreError {
#[error("serialization error")]
Serialization,
#[error("io error: {0}")]
Io(String),
#[error("memory storage error: {0}")]
MemoryStorage(#[from] openmls_memory_storage::MemoryStorageError),
}
impl DiskKeyStore {
/// In-memory keystore (no persistence).
pub fn ephemeral() -> Self {
Self {
path: None,
storage: MemoryStorage::default(),
}
}
/// Persistent keystore backed by `path`. Creates an empty store if missing.
pub fn persistent(path: impl AsRef<Path>) -> Result<Self, DiskKeyStoreError> {
let path = path.as_ref().to_path_buf();
let storage = if path.exists() {
let bytes = fs::read(&path).map_err(|e| DiskKeyStoreError::Io(e.to_string()))?;
if bytes.is_empty() {
MemoryStorage::default()
} else {
let map: std::collections::HashMap<Vec<u8>, Vec<u8>> =
bincode::deserialize(&bytes)
.map_err(|_| DiskKeyStoreError::Serialization)?;
let storage = MemoryStorage::default();
let mut values = storage.values.write()
.map_err(|_| DiskKeyStoreError::Io("lock poisoned".into()))?;
*values = map;
drop(values);
storage
}
} else {
MemoryStorage::default()
};
let store = Self {
path: Some(path),
storage,
};
// Set restrictive file permissions on the keystore file.
store.set_file_permissions()?;
Ok(store)
}
fn flush(&self) -> Result<(), DiskKeyStoreError> {
let Some(path) = &self.path else {
return Ok(());
};
let values = self.storage.values.read()
.map_err(|_| DiskKeyStoreError::Io("lock poisoned".into()))?;
let bytes = bincode::serialize(&*values)
.map_err(|_| DiskKeyStoreError::Serialization)?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).map_err(|e| DiskKeyStoreError::Io(e.to_string()))?;
}
fs::write(path, &bytes).map_err(|e| DiskKeyStoreError::Io(e.to_string()))?;
self.set_file_permissions()?;
Ok(())
}
/// Serialize the backing storage to bytes (bincode).
///
/// This captures all key material *and* MLS group state held by the
/// `StorageProvider`, allowing the caller to persist it in a database
/// column instead of (or in addition to) on-disk files.
pub fn to_bytes(&self) -> Result<Vec<u8>, DiskKeyStoreError> {
let values = self.storage.values.read()
.map_err(|_| DiskKeyStoreError::Io("lock poisoned".into()))?;
bincode::serialize(&*values).map_err(|_| DiskKeyStoreError::Serialization)
}
/// Restore a `DiskKeyStore` from bytes previously produced by [`to_bytes`].
pub fn from_bytes(bytes: &[u8]) -> Result<Self, DiskKeyStoreError> {
let map: std::collections::HashMap<Vec<u8>, Vec<u8>> =
bincode::deserialize(bytes).map_err(|_| DiskKeyStoreError::Serialization)?;
let storage = MemoryStorage::default();
let mut values = storage.values.write()
.map_err(|_| DiskKeyStoreError::Io("lock poisoned".into()))?;
*values = map;
drop(values);
Ok(Self {
path: None,
storage,
})
}
/// Restrict file permissions to owner-only (0o600) on Unix.
#[cfg(unix)]
fn set_file_permissions(&self) -> Result<(), DiskKeyStoreError> {
use std::os::unix::fs::PermissionsExt;
if let Some(path) = &self.path {
if path.exists() {
let perms = std::fs::Permissions::from_mode(0o600);
fs::set_permissions(path, perms)
.map_err(|e| DiskKeyStoreError::Io(format!("set permissions: {e}")))?;
}
}
Ok(())
}
#[cfg(not(unix))]
fn set_file_permissions(&self) -> Result<(), DiskKeyStoreError> {
Ok(())
}
}
impl Default for DiskKeyStore {
fn default() -> Self {
Self::ephemeral()
}
}
/// Delegate all `StorageProvider` methods to the inner `MemoryStorage`,
/// flushing to disk after every write/delete operation.
///
/// Inner-storage errors are converted to `DiskKeyStoreError` via the
/// `MemoryStorageError` `From` impl; flush failures surface as the `Io` or
/// `Serialization` variants. If a flush fails, the in-memory state has
/// already been updated (matching the old DiskKeyStore behavior).
impl StorageProvider<CURRENT_VERSION> for DiskKeyStore {
type Error = DiskKeyStoreError;
fn write_mls_join_config<
GroupId: traits::GroupId<CURRENT_VERSION>,
MlsGroupJoinConfig: traits::MlsGroupJoinConfig<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
config: &MlsGroupJoinConfig,
) -> Result<(), Self::Error> {
self.storage.write_mls_join_config(group_id, config)?;
self.flush()
}
fn append_own_leaf_node<
GroupId: traits::GroupId<CURRENT_VERSION>,
LeafNode: traits::LeafNode<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
leaf_node: &LeafNode,
) -> Result<(), Self::Error> {
self.storage.append_own_leaf_node(group_id, leaf_node)?;
self.flush()
}
fn queue_proposal<
GroupId: traits::GroupId<CURRENT_VERSION>,
ProposalRef: traits::ProposalRef<CURRENT_VERSION>,
QueuedProposal: traits::QueuedProposal<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
proposal_ref: &ProposalRef,
proposal: &QueuedProposal,
) -> Result<(), Self::Error> {
self.storage.queue_proposal(group_id, proposal_ref, proposal)?;
self.flush()
}
fn write_tree<
GroupId: traits::GroupId<CURRENT_VERSION>,
TreeSync: traits::TreeSync<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
tree: &TreeSync,
) -> Result<(), Self::Error> {
self.storage.write_tree(group_id, tree)?;
self.flush()
}
fn write_interim_transcript_hash<
GroupId: traits::GroupId<CURRENT_VERSION>,
InterimTranscriptHash: traits::InterimTranscriptHash<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
interim_transcript_hash: &InterimTranscriptHash,
) -> Result<(), Self::Error> {
self.storage.write_interim_transcript_hash(group_id, interim_transcript_hash)?;
self.flush()
}
fn write_context<
GroupId: traits::GroupId<CURRENT_VERSION>,
GroupContext: traits::GroupContext<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
group_context: &GroupContext,
) -> Result<(), Self::Error> {
self.storage.write_context(group_id, group_context)?;
self.flush()
}
fn write_confirmation_tag<
GroupId: traits::GroupId<CURRENT_VERSION>,
ConfirmationTag: traits::ConfirmationTag<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
confirmation_tag: &ConfirmationTag,
) -> Result<(), Self::Error> {
self.storage.write_confirmation_tag(group_id, confirmation_tag)?;
self.flush()
}
fn write_group_state<
GroupState: traits::GroupState<CURRENT_VERSION>,
GroupId: traits::GroupId<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
group_state: &GroupState,
) -> Result<(), Self::Error> {
self.storage.write_group_state(group_id, group_state)?;
self.flush()
}
fn write_message_secrets<
GroupId: traits::GroupId<CURRENT_VERSION>,
MessageSecrets: traits::MessageSecrets<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
message_secrets: &MessageSecrets,
) -> Result<(), Self::Error> {
self.storage.write_message_secrets(group_id, message_secrets)?;
self.flush()
}
fn write_resumption_psk_store<
GroupId: traits::GroupId<CURRENT_VERSION>,
ResumptionPskStore: traits::ResumptionPskStore<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
resumption_psk_store: &ResumptionPskStore,
) -> Result<(), Self::Error> {
self.storage.write_resumption_psk_store(group_id, resumption_psk_store)?;
self.flush()
}
fn write_own_leaf_index<
GroupId: traits::GroupId<CURRENT_VERSION>,
LeafNodeIndex: traits::LeafNodeIndex<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
own_leaf_index: &LeafNodeIndex,
) -> Result<(), Self::Error> {
self.storage.write_own_leaf_index(group_id, own_leaf_index)?;
self.flush()
}
fn write_group_epoch_secrets<
GroupId: traits::GroupId<CURRENT_VERSION>,
GroupEpochSecrets: traits::GroupEpochSecrets<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
group_epoch_secrets: &GroupEpochSecrets,
) -> Result<(), Self::Error> {
self.storage.write_group_epoch_secrets(group_id, group_epoch_secrets)?;
self.flush()
}
fn write_signature_key_pair<
SignaturePublicKey: traits::SignaturePublicKey<CURRENT_VERSION>,
SignatureKeyPair: traits::SignatureKeyPair<CURRENT_VERSION>,
>(
&self,
public_key: &SignaturePublicKey,
signature_key_pair: &SignatureKeyPair,
) -> Result<(), Self::Error> {
self.storage.write_signature_key_pair(public_key, signature_key_pair)?;
self.flush()
}
fn write_encryption_key_pair<
EncryptionKey: traits::EncryptionKey<CURRENT_VERSION>,
HpkeKeyPair: traits::HpkeKeyPair<CURRENT_VERSION>,
>(
&self,
public_key: &EncryptionKey,
key_pair: &HpkeKeyPair,
) -> Result<(), Self::Error> {
self.storage.write_encryption_key_pair(public_key, key_pair)?;
self.flush()
}
fn write_encryption_epoch_key_pairs<
GroupId: traits::GroupId<CURRENT_VERSION>,
EpochKey: traits::EpochKey<CURRENT_VERSION>,
HpkeKeyPair: traits::HpkeKeyPair<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
epoch: &EpochKey,
leaf_index: u32,
key_pairs: &[HpkeKeyPair],
) -> Result<(), Self::Error> {
self.storage.write_encryption_epoch_key_pairs(group_id, epoch, leaf_index, key_pairs)?;
self.flush()
}
fn write_key_package<
HashReference: traits::HashReference<CURRENT_VERSION>,
KeyPackage: traits::KeyPackage<CURRENT_VERSION>,
>(
&self,
hash_ref: &HashReference,
key_package: &KeyPackage,
) -> Result<(), Self::Error> {
self.storage.write_key_package(hash_ref, key_package)?;
self.flush()
}
fn write_psk<
PskId: traits::PskId<CURRENT_VERSION>,
PskBundle: traits::PskBundle<CURRENT_VERSION>,
>(
&self,
psk_id: &PskId,
psk: &PskBundle,
) -> Result<(), Self::Error> {
self.storage.write_psk(psk_id, psk)?;
self.flush()
}
// --- getters (no flush needed) ---
fn mls_group_join_config<
GroupId: traits::GroupId<CURRENT_VERSION>,
MlsGroupJoinConfig: traits::MlsGroupJoinConfig<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<MlsGroupJoinConfig>, Self::Error> {
Ok(self.storage.mls_group_join_config(group_id)?)
}
fn own_leaf_nodes<
GroupId: traits::GroupId<CURRENT_VERSION>,
LeafNode: traits::LeafNode<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Vec<LeafNode>, Self::Error> {
Ok(self.storage.own_leaf_nodes(group_id)?)
}
fn queued_proposal_refs<
GroupId: traits::GroupId<CURRENT_VERSION>,
ProposalRef: traits::ProposalRef<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Vec<ProposalRef>, Self::Error> {
Ok(self.storage.queued_proposal_refs(group_id)?)
}
fn queued_proposals<
GroupId: traits::GroupId<CURRENT_VERSION>,
ProposalRef: traits::ProposalRef<CURRENT_VERSION>,
QueuedProposal: traits::QueuedProposal<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Vec<(ProposalRef, QueuedProposal)>, Self::Error> {
Ok(self.storage.queued_proposals(group_id)?)
}
fn tree<
GroupId: traits::GroupId<CURRENT_VERSION>,
TreeSync: traits::TreeSync<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<TreeSync>, Self::Error> {
Ok(self.storage.tree(group_id)?)
}
fn group_context<
GroupId: traits::GroupId<CURRENT_VERSION>,
GroupContext: traits::GroupContext<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<GroupContext>, Self::Error> {
Ok(self.storage.group_context(group_id)?)
}
fn interim_transcript_hash<
GroupId: traits::GroupId<CURRENT_VERSION>,
InterimTranscriptHash: traits::InterimTranscriptHash<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<InterimTranscriptHash>, Self::Error> {
Ok(self.storage.interim_transcript_hash(group_id)?)
}
fn confirmation_tag<
GroupId: traits::GroupId<CURRENT_VERSION>,
ConfirmationTag: traits::ConfirmationTag<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<ConfirmationTag>, Self::Error> {
Ok(self.storage.confirmation_tag(group_id)?)
}
fn group_state<
GroupState: traits::GroupState<CURRENT_VERSION>,
GroupId: traits::GroupId<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<GroupState>, Self::Error> {
Ok(self.storage.group_state(group_id)?)
}
fn message_secrets<
GroupId: traits::GroupId<CURRENT_VERSION>,
MessageSecrets: traits::MessageSecrets<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<MessageSecrets>, Self::Error> {
Ok(self.storage.message_secrets(group_id)?)
}
fn resumption_psk_store<
GroupId: traits::GroupId<CURRENT_VERSION>,
ResumptionPskStore: traits::ResumptionPskStore<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<ResumptionPskStore>, Self::Error> {
Ok(self.storage.resumption_psk_store(group_id)?)
}
fn own_leaf_index<
GroupId: traits::GroupId<CURRENT_VERSION>,
LeafNodeIndex: traits::LeafNodeIndex<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<LeafNodeIndex>, Self::Error> {
Ok(self.storage.own_leaf_index(group_id)?)
}
fn group_epoch_secrets<
GroupId: traits::GroupId<CURRENT_VERSION>,
GroupEpochSecrets: traits::GroupEpochSecrets<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<Option<GroupEpochSecrets>, Self::Error> {
Ok(self.storage.group_epoch_secrets(group_id)?)
}
fn signature_key_pair<
SignaturePublicKey: traits::SignaturePublicKey<CURRENT_VERSION>,
SignatureKeyPair: traits::SignatureKeyPair<CURRENT_VERSION>,
>(
&self,
public_key: &SignaturePublicKey,
) -> Result<Option<SignatureKeyPair>, Self::Error> {
Ok(self.storage.signature_key_pair(public_key)?)
}
fn encryption_key_pair<
HpkeKeyPair: traits::HpkeKeyPair<CURRENT_VERSION>,
EncryptionKey: traits::EncryptionKey<CURRENT_VERSION>,
>(
&self,
public_key: &EncryptionKey,
) -> Result<Option<HpkeKeyPair>, Self::Error> {
Ok(self.storage.encryption_key_pair(public_key)?)
}
fn encryption_epoch_key_pairs<
GroupId: traits::GroupId<CURRENT_VERSION>,
EpochKey: traits::EpochKey<CURRENT_VERSION>,
HpkeKeyPair: traits::HpkeKeyPair<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
epoch: &EpochKey,
leaf_index: u32,
) -> Result<Vec<HpkeKeyPair>, Self::Error> {
Ok(self.storage.encryption_epoch_key_pairs(group_id, epoch, leaf_index)?)
}
fn key_package<
KeyPackageRef: traits::HashReference<CURRENT_VERSION>,
KeyPackage: traits::KeyPackage<CURRENT_VERSION>,
>(
&self,
hash_ref: &KeyPackageRef,
) -> Result<Option<KeyPackage>, Self::Error> {
Ok(self.storage.key_package(hash_ref)?)
}
fn psk<
PskBundle: traits::PskBundle<CURRENT_VERSION>,
PskId: traits::PskId<CURRENT_VERSION>,
>(
&self,
psk_id: &PskId,
) -> Result<Option<PskBundle>, Self::Error> {
Ok(self.storage.psk(psk_id)?)
}
// --- deleters (flush needed) ---
fn remove_proposal<
GroupId: traits::GroupId<CURRENT_VERSION>,
ProposalRef: traits::ProposalRef<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
proposal_ref: &ProposalRef,
) -> Result<(), Self::Error> {
self.storage.remove_proposal(group_id, proposal_ref)?;
self.flush()
}
fn delete_own_leaf_nodes<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_own_leaf_nodes(group_id)?;
self.flush()
}
fn delete_group_config<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_group_config(group_id)?;
self.flush()
}
fn delete_tree<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_tree(group_id)?;
self.flush()
}
fn delete_confirmation_tag<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_confirmation_tag(group_id)?;
self.flush()
}
fn delete_group_state<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_group_state(group_id)?;
self.flush()
}
fn delete_context<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_context(group_id)?;
self.flush()
}
fn delete_interim_transcript_hash<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_interim_transcript_hash(group_id)?;
self.flush()
}
fn delete_message_secrets<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_message_secrets(group_id)?;
self.flush()
}
fn delete_all_resumption_psk_secrets<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_all_resumption_psk_secrets(group_id)?;
self.flush()
}
fn delete_own_leaf_index<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_own_leaf_index(group_id)?;
self.flush()
}
fn delete_group_epoch_secrets<GroupId: traits::GroupId<CURRENT_VERSION>>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.delete_group_epoch_secrets(group_id)?;
self.flush()
}
fn clear_proposal_queue<
GroupId: traits::GroupId<CURRENT_VERSION>,
ProposalRef: traits::ProposalRef<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
) -> Result<(), Self::Error> {
self.storage.clear_proposal_queue::<GroupId, ProposalRef>(group_id)?;
self.flush()
}
fn delete_signature_key_pair<
SignaturePublicKey: traits::SignaturePublicKey<CURRENT_VERSION>,
>(
&self,
public_key: &SignaturePublicKey,
) -> Result<(), Self::Error> {
self.storage.delete_signature_key_pair(public_key)?;
self.flush()
}
fn delete_encryption_key_pair<EncryptionKey: traits::EncryptionKey<CURRENT_VERSION>>(
&self,
public_key: &EncryptionKey,
) -> Result<(), Self::Error> {
self.storage.delete_encryption_key_pair(public_key)?;
self.flush()
}
fn delete_encryption_epoch_key_pairs<
GroupId: traits::GroupId<CURRENT_VERSION>,
EpochKey: traits::EpochKey<CURRENT_VERSION>,
>(
&self,
group_id: &GroupId,
epoch: &EpochKey,
leaf_index: u32,
) -> Result<(), Self::Error> {
self.storage.delete_encryption_epoch_key_pairs(group_id, epoch, leaf_index)?;
self.flush()
}
fn delete_key_package<KeyPackageRef: traits::HashReference<CURRENT_VERSION>>(
&self,
hash_ref: &KeyPackageRef,
) -> Result<(), Self::Error> {
self.storage.delete_key_package(hash_ref)?;
self.flush()
}
fn delete_psk<PskKey: traits::PskId<CURRENT_VERSION>>(
&self,
psk_id: &PskKey,
) -> Result<(), Self::Error> {
self.storage.delete_psk(psk_id)?;
self.flush()
}
}
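The delegation above follows a simple write-through pattern: mutate the in-memory map first, then flush the whole snapshot to disk. A std-only sketch of that pattern (using a naive length-prefixed format for illustration; the real store serializes the outer map with bincode):

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::PathBuf;

// Std-only sketch of the write-through pattern used by `DiskKeyStore`:
// every mutation updates the in-memory map, then persists a full snapshot.
struct WriteThroughMap {
    path: PathBuf,
    map: HashMap<Vec<u8>, Vec<u8>>,
}

impl WriteThroughMap {
    fn new(path: PathBuf) -> Self {
        Self { path, map: HashMap::new() }
    }

    fn insert(&mut self, key: Vec<u8>, value: Vec<u8>) -> io::Result<()> {
        self.map.insert(key, value); // in-memory state updates first
        self.flush()                 // then the snapshot hits disk
    }

    fn flush(&self) -> io::Result<()> {
        // Naive serialization: u32 length prefix per key and per value.
        let mut buf = Vec::new();
        for (k, v) in &self.map {
            buf.extend_from_slice(&(k.len() as u32).to_le_bytes());
            buf.extend_from_slice(k);
            buf.extend_from_slice(&(v.len() as u32).to_le_bytes());
            buf.extend_from_slice(v);
        }
        fs::write(&self.path, buf)
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("write_through_demo.bin");
    let mut store = WriteThroughMap::new(path.clone());
    store.insert(b"k".to_vec(), b"v".to_vec())?;
    // The file reflects the write immediately: 4 + 1 + 4 + 1 = 10 bytes.
    assert_eq!(fs::read(&path)?.len(), 10);
    fs::remove_file(&path)?;
    Ok(())
}
```

The trade-off is the same one `DiskKeyStore` makes: every write pays a full-snapshot flush, which is fine for small keystores and keeps crash recovery trivial.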

View File

@@ -1,5 +1,5 @@
//! Core cryptographic primitives, MLS group state machine, and hybrid
//! post-quantum KEM for quicproquo.
//! post-quantum KEM for quicprochat.
//!
//! # WASM support
//!

View File

@@ -5,7 +5,7 @@
use opaque_ke::CipherSuite;
/// OPAQUE cipher suite for quicproquo.
/// OPAQUE cipher suite for quicprochat.
///
/// - **OPRF**: Ristretto255 (curve25519-based, ~128-bit security)
/// - **Key exchange**: Triple-DH (3DH) over Ristretto255 with SHA-512

View File

@@ -48,7 +48,7 @@ use zeroize::Zeroizing;
use crate::error::CoreError;
/// Domain separation label for the hybrid Noise handshake.
const PROTOCOL_NAME: &[u8] = b"quicproquo-pq-noise-v1";
const PROTOCOL_NAME: &[u8] = b"quicprochat-pq-noise-v1";
/// ML-KEM-768 encapsulation key length.
const MLKEM_EK_LEN: usize = 1184;

View File

@@ -91,10 +91,10 @@ fn generate_code(rng: &mut impl RngCore) -> String {
}
/// Derive a 32-byte recovery token from a code (used for server-side lookup).
/// The token is `SHA-256("qpq-recovery-token:" || code)`.
/// The token is `SHA-256("qpc-recovery-token:" || code)`.
fn derive_recovery_token(code: &str) -> [u8; 32] {
let mut hasher = Sha256::new();
hasher.update(b"qpq-recovery-token:");
hasher.update(b"qpc-recovery-token:");
hasher.update(code.as_bytes());
hasher.finalize().into()
}
@@ -206,7 +206,7 @@ pub fn recover_from_bundle(
/// Compute the token hash for a recovery code (for server-side lookup).
///
/// This is `SHA-256(SHA-256("qpq-recovery-token:" || code))`.
/// This is `SHA-256(SHA-256("qpc-recovery-token:" || code))`.
pub fn recovery_token_hash(code: &str) -> Vec<u8> {
let token = derive_recovery_token(code);
Sha256::digest(token).to_vec()

View File

@@ -7,7 +7,7 @@
//! 1. Sort the keys lexicographically so the result is symmetric.
//! 2. Concatenate: `input = key_lo || key_hi` (64 bytes).
//! 3. Compute HMAC-SHA256(key=info, data=input) where
//! `info = b"quicproquo-safety-number-v1"`.
//! `info = b"quicprochat-safety-number-v1"`.
//! 4. Iterate the HMAC 5200 times: `hash = HMAC-SHA256(key=info, data=hash)`.
//! 5. Interpret the 32-byte result as 4× 64-bit big-endian integers
//! (= 256 bits → 4 groups of 64 bits). Extract 3 decimal groups per
@@ -23,7 +23,7 @@ use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
/// Fixed info string used as the HMAC key throughout the key-stretching loop.
const INFO: &[u8] = b"quicproquo-safety-number-v1";
const INFO: &[u8] = b"quicprochat-safety-number-v1";
/// Compute a 60-digit safety number from two 32-byte Ed25519 public keys.
///

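Step 5 of the derivation above is cut off by the hunk boundary, but the stated shape (4 big-endian u64s, 3 decimal groups each, 60 digits total) suggests 12 groups of 5 digits. A std-only sketch of one *plausible* extraction, assuming modulo-100000 grouping (the placeholder digest stands in for the iterated HMAC-SHA256 result, and the exact grouping rule is an assumption, not confirmed by the source):

```rust
// Sketch of step 5 (ASSUMED grouping rule): interpret a 32-byte digest as
// four big-endian u64s and take three 5-digit decimal groups from each,
// giving 12 groups x 5 digits = 60 digits.
fn decimal_groups(digest: &[u8; 32]) -> Vec<String> {
    let mut groups = Vec::with_capacity(12);
    for chunk in digest.chunks_exact(8) {
        let mut n = u64::from_be_bytes(chunk.try_into().unwrap());
        for _ in 0..3 {
            groups.push(format!("{:05}", n % 100_000)); // lowest 5 digits
            n /= 100_000;
        }
    }
    groups
}

fn main() {
    // Placeholder digest; the real input is the 5200x-iterated HMAC output.
    let groups = decimal_groups(&[0u8; 32]);
    assert_eq!(groups.len(), 12);
    assert!(groups.iter().all(|g| g.len() == 5));
    println!("{}", groups.join(" "));
}
```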
View File

@@ -1,9 +1,10 @@
[package]
name = "quicproquo-kt"
name = "quicprochat-kt"
version = "0.1.0"
edition = "2021"
edition.workspace = true
description = "Key Transparency: append-only SHA-256 Merkle log for (username, identity_key) bindings."
license = "MIT"
license = "Apache-2.0 OR MIT"
repository.workspace = true
[lints]
workspace = true

View File

@@ -0,0 +1,50 @@
[package]
name = "quicprochat-p2p"
version = "0.1.0"
edition.workspace = true
description = "P2P transport layer for quicprochat using iroh."
license = "Apache-2.0 OR MIT"
repository.workspace = true
[features]
traffic-resistance = []
[lints]
workspace = true
[dependencies]
iroh = "0.96"
tokio = { version = "1", features = ["macros", "rt-multi-thread", "time", "sync", "net", "io-util"] }
async-trait = "0.1"
tracing = "0.1"
anyhow = "1"
# Mesh identity & store-and-forward
quicprochat-core = { path = "../quicprochat-core", default-features = false }
serde = { workspace = true }
serde_json = { workspace = true }
ciborium = { workspace = true }
sha2 = { workspace = true }
hex = { workspace = true }
# Broadcast channels (ChaCha20-Poly1305 symmetric encryption)
chacha20poly1305 = { workspace = true }
rand = { workspace = true }
zeroize = { workspace = true }
# Lightweight mesh link handshake (X25519 ECDH + HKDF)
x25519-dalek = { workspace = true }
hkdf = { workspace = true }
thiserror = { workspace = true }
# Configuration
toml = "0.8"
humantime-serde = "1"
[dev-dependencies]
tempfile = "3"
meshservice = { path = "../meshservice" }
[[example]]
name = "fapp_demo"
path = "../../examples/fapp_demo.rs"

View File

@@ -0,0 +1,96 @@
//! Simulated mesh relay path: **A (LoRa)** → **B (LoRa + TCP relay)** → **C (TCP)** → back via **B** → **A**.
//!
//! Uses [`quicprochat_p2p::transport_lora::LoRaMockMedium`] — no real hardware required.
//!
//! ```text
//! Node A Node B Node C
//! LoRa addr 0x01 LoRa 0x02 + TCP listen TCP (WiFi / LAN)
//! │ │ │
//! └──── LoRa ───────┘ │
//! └──────── TCP ──────────────┘
//! ```
//!
//! Run: `cargo run -p quicprochat-p2p --example mesh_lora_relay_demo`
use std::sync::Arc;
use std::time::Duration;
use quicprochat_p2p::transport::{MeshTransport, TransportAddr};
use quicprochat_p2p::transport_lora::{DutyCycleTracker, LoRaConfig, LoRaMockMedium};
use quicprochat_p2p::transport_tcp::TcpTransport;
const ADDR_A: [u8; 4] = [0x01, 0, 0, 0];
const ADDR_B: [u8; 4] = [0x02, 0, 0, 0];
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let medium = LoRaMockMedium::new();
let duty = Arc::new(DutyCycleTracker::new(3_600_000));
let lora_a = medium
.connect(ADDR_A, LoRaConfig::default(), Arc::clone(&duty))
.await?;
let lora_b = medium
.connect(ADDR_B, LoRaConfig::default(), Arc::clone(&duty))
.await?;
let tcp_b = TcpTransport::bind("127.0.0.1:0").await?;
let tcp_c = TcpTransport::bind("127.0.0.1:0").await?;
let c_listen = tcp_c.local_addr();
let b_listen = tcp_b.local_addr();
let c_addr = TransportAddr::Socket(c_listen);
let b_addr = TransportAddr::Socket(b_listen);
println!(
"LoRa mock mesh demo: B relays LoRa <-> TCP (B TCP {}, C TCP {})",
b_listen, c_listen
);
let relay = tokio::spawn(async move {
for _ in 0..2 {
tokio::select! {
p = lora_b.recv() => {
let p = p.expect("B LoRa recv");
println!("B: LoRa from {} -> TCP ({} bytes)", p.from, p.data.len());
tcp_b.send(&c_addr, &p.data).await.expect("B TCP send to C");
}
p = tcp_b.recv() => {
let p = p.expect("B TCP recv");
println!("B: TCP -> LoRa A ({} bytes)", p.data.len());
lora_b
.send(&TransportAddr::LoRa(ADDR_A), &p.data)
.await
.expect("B LoRa send to A");
}
}
}
});
let c_task = tokio::spawn(async move {
let pkt = tcp_c.recv().await.expect("C TCP recv");
println!("C: got {} bytes from B relay", pkt.data.len());
assert_eq!(pkt.data, b"hello via mesh");
tcp_c
.send(&b_addr, b"ack from C")
.await
.expect("C TCP send");
});
tokio::time::sleep(Duration::from_millis(50)).await;
lora_a
.send(&TransportAddr::LoRa(ADDR_B), b"hello via mesh")
.await?;
let reply = lora_a.recv().await?;
println!("A: LoRa reply {} bytes", reply.data.len());
assert_eq!(reply.data, b"ack from C");
c_task.await.expect("node C task panicked");
relay.await.expect("relay task panicked");
lora_a.close().await.ok();
println!("Done: LoRa + TCP relay path OK.");
Ok(())
}

View File

@@ -0,0 +1,135 @@
//! Truncated mesh addresses for bandwidth-efficient routing.
//!
//! A [`MeshAddress`] is derived from an Ed25519 public key by taking the first
//! 16 bytes of its SHA-256 hash. This provides effectively unique addressing
//! (birthday collision bound ≈ 2^64 addresses) while saving 16 bytes per
//! packet compared to full 32-byte public keys.
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::fmt;
/// 16-byte truncated mesh address.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct MeshAddress([u8; 16]);
impl MeshAddress {
/// Derive from a 32-byte Ed25519 public key.
pub fn from_public_key(key: &[u8; 32]) -> Self {
let hash = Sha256::digest(key);
let mut addr = [0u8; 16];
addr.copy_from_slice(&hash[..16]);
Self(addr)
}
/// Create from raw 16-byte array.
pub fn from_bytes(bytes: [u8; 16]) -> Self {
Self(bytes)
}
/// Get the raw 16-byte address.
pub fn as_bytes(&self) -> &[u8; 16] {
&self.0
}
/// Check if a 32-byte public key matches this address.
pub fn matches_key(&self, key: &[u8; 32]) -> bool {
Self::from_public_key(key) == *self
}
/// The broadcast address (all zeros).
pub const BROADCAST: Self = Self([0u8; 16]);
/// Check if this is the broadcast address.
pub fn is_broadcast(&self) -> bool {
self.0 == [0u8; 16]
}
}
impl fmt::Debug for MeshAddress {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "MeshAddress({})", hex::encode(self.0))
}
}
impl fmt::Display for MeshAddress {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", hex::encode(&self.0[..8]))
}
}
impl From<[u8; 16]> for MeshAddress {
fn from(bytes: [u8; 16]) -> Self {
Self(bytes)
}
}
impl AsRef<[u8; 16]> for MeshAddress {
fn as_ref(&self) -> &[u8; 16] {
&self.0
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn from_key_deterministic() {
let key = [42u8; 32];
let addr1 = MeshAddress::from_public_key(&key);
let addr2 = MeshAddress::from_public_key(&key);
assert_eq!(addr1, addr2, "same key must produce same address");
}
#[test]
fn different_keys_different_addresses() {
let key_a = [1u8; 32];
let key_b = [2u8; 32];
let addr_a = MeshAddress::from_public_key(&key_a);
let addr_b = MeshAddress::from_public_key(&key_b);
assert_ne!(addr_a, addr_b, "different keys must produce different addresses");
}
#[test]
fn matches_key_works() {
let key = [99u8; 32];
let addr = MeshAddress::from_public_key(&key);
assert!(addr.matches_key(&key), "correct key must match");
let wrong_key = [100u8; 32];
assert!(!addr.matches_key(&wrong_key), "wrong key must not match");
}
#[test]
fn broadcast_address() {
assert_eq!(*MeshAddress::BROADCAST.as_bytes(), [0u8; 16]);
assert!(MeshAddress::BROADCAST.is_broadcast());
let non_broadcast = MeshAddress::from_bytes([1u8; 16]);
assert!(!non_broadcast.is_broadcast());
}
#[test]
fn display_formatting() {
let key = [0xAB; 32];
let addr = MeshAddress::from_public_key(&key);
let display = format!("{addr}");
// Display shows first 8 bytes as hex = 16 hex chars.
assert_eq!(display.len(), 16, "display should show 8 bytes = 16 hex chars");
let debug = format!("{addr:?}");
// Debug shows all 16 bytes as hex = 32 hex chars, plus wrapper.
assert!(debug.starts_with("MeshAddress("));
assert!(debug.ends_with(')'));
}
#[test]
fn serde_roundtrip() {
let key = [77u8; 32];
let addr = MeshAddress::from_public_key(&key);
let json = serde_json::to_string(&addr).expect("serialize");
let restored: MeshAddress = serde_json::from_str(&json).expect("deserialize");
assert_eq!(addr, restored);
}
}
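The truncation in `MeshAddress::from_public_key` and the short `Display` form can be sketched with std alone (the 32-byte value below stands in for SHA-256(identity_key); the real crate computes the digest with `sha2` and hex-encodes with `hex`):

```rust
// Std-only sketch of the address truncation: keep the first 16 bytes of
// a 32-byte digest, and render the first 8 bytes as hex for display.
fn truncate_to_address(digest: &[u8; 32]) -> [u8; 16] {
    let mut addr = [0u8; 16];
    addr.copy_from_slice(&digest[..16]); // first 16 bytes only
    addr
}

fn is_broadcast(addr: &[u8; 16]) -> bool {
    *addr == [0u8; 16] // all-zeros = broadcast, as in MeshAddress::BROADCAST
}

fn main() {
    let digest = [7u8; 32]; // placeholder for SHA-256(public_key)
    let addr = truncate_to_address(&digest);
    assert_eq!(addr, [7u8; 16]);
    assert!(!is_broadcast(&addr));
    assert!(is_broadcast(&[0u8; 16]));

    // Short display form: first 8 bytes as hex = 16 hex chars.
    let short: String = addr[..8].iter().map(|b| format!("{b:02x}")).collect();
    assert_eq!(short, "0707070707070707");
}
```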

View File

@@ -0,0 +1,316 @@
//! Mesh announce protocol for self-organizing network discovery.
//!
//! Nodes periodically broadcast signed [`MeshAnnounce`] packets. These propagate
//! through the mesh, building each node's [`RoutingTable`](crate::routing_table::RoutingTable).
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::time::{SystemTime, UNIX_EPOCH};
use crate::identity::MeshIdentity;
/// Capability flag: node can relay messages for others.
pub const CAP_RELAY: u16 = 0x0001;
/// Capability flag: node has store-and-forward.
pub const CAP_STORE: u16 = 0x0002;
/// Capability flag: node is connected to Internet/server.
pub const CAP_GATEWAY: u16 = 0x0004;
/// Capability flag: node is on a low-bandwidth transport only.
pub const CAP_CONSTRAINED: u16 = 0x0008;
/// Capability flag: node has KeyPackages available for MLS group invites.
pub const CAP_MLS_READY: u16 = 0x0010;
/// A signed mesh node announcement.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct MeshAnnounce {
/// Ed25519 public key of the announcing node (32 bytes).
pub identity_key: Vec<u8>,
/// Truncated address: SHA-256(identity_key)[0..16] — used for routing.
pub address: [u8; 16],
/// Capability bitfield.
pub capabilities: u16,
/// Monotonically increasing sequence number (per node).
pub sequence: u64,
/// Unix timestamp of creation.
pub timestamp: u64,
/// Transports this node is reachable on: Vec<(transport_name, serialized_addr)>.
pub reachable_via: Vec<(String, Vec<u8>)>,
/// Current hop count (incremented on re-broadcast).
pub hop_count: u8,
/// Maximum propagation hops.
pub max_hops: u8,
/// Optional hash of current KeyPackage (SHA-256, truncated to 8 bytes).
/// Present when CAP_MLS_READY is set. Peers can request the full KeyPackage.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub keypackage_hash: Option<[u8; 8]>,
/// Ed25519 signature over all fields except signature and hop_count.
pub signature: Vec<u8>,
}
/// Compute the 16-byte mesh address from an Ed25519 public key.
///
/// The address is the first 16 bytes of SHA-256(identity_key).
pub fn compute_address(identity_key: &[u8]) -> [u8; 16] {
let hash = Sha256::digest(identity_key);
let mut addr = [0u8; 16];
addr.copy_from_slice(&hash[..16]);
addr
}
/// Compute the 8-byte truncated hash of a KeyPackage for announce inclusion.
///
/// This hash is used to identify which KeyPackage version a node has available.
pub fn compute_keypackage_hash(keypackage_bytes: &[u8]) -> [u8; 8] {
let hash = Sha256::digest(keypackage_bytes);
let mut kp_hash = [0u8; 8];
kp_hash.copy_from_slice(&hash[..8]);
kp_hash
}
impl MeshAnnounce {
/// Create and sign a new mesh announcement.
pub fn new(
identity: &MeshIdentity,
capabilities: u16,
reachable_via: Vec<(String, Vec<u8>)>,
max_hops: u8,
) -> Self {
Self::with_keypackage(identity, capabilities, reachable_via, max_hops, None)
}
/// Create announcement with an optional KeyPackage hash.
pub fn with_keypackage(
identity: &MeshIdentity,
capabilities: u16,
reachable_via: Vec<(String, Vec<u8>)>,
max_hops: u8,
keypackage_hash: Option<[u8; 8]>,
) -> Self {
let identity_key = identity.public_key().to_vec();
let address = compute_address(&identity_key);
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let mut announce = Self {
identity_key,
address,
capabilities,
sequence: 0,
timestamp,
reachable_via,
hop_count: 0,
max_hops,
keypackage_hash,
signature: Vec::new(),
};
let signable = announce.signable_bytes();
announce.signature = identity.sign(&signable).to_vec();
announce
}
/// Create and sign with a specific sequence number.
pub fn with_sequence(
identity: &MeshIdentity,
capabilities: u16,
reachable_via: Vec<(String, Vec<u8>)>,
max_hops: u8,
sequence: u64,
) -> Self {
let mut announce = Self::new(identity, capabilities, reachable_via, max_hops);
announce.sequence = sequence;
// Re-sign with the correct sequence number.
let signable = announce.signable_bytes();
announce.signature = identity.sign(&signable).to_vec();
announce
}
/// Assemble the byte string that is signed / verified.
///
/// `hop_count` and `signature` are excluded: forwarding nodes increment
/// hop_count without re-signing (same design as [`MeshEnvelope`]).
fn signable_bytes(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(
self.identity_key.len() + 16 + 2 + 8 + 8 + self.reachable_via.len() * 32 + 1 + 9,
);
buf.extend_from_slice(&self.identity_key);
buf.extend_from_slice(&self.address);
buf.extend_from_slice(&self.capabilities.to_le_bytes());
buf.extend_from_slice(&self.sequence.to_le_bytes());
buf.extend_from_slice(&self.timestamp.to_le_bytes());
// NOTE: names and addresses are concatenated without length prefixes,
// so distinct reachable_via splits could yield identical signable bytes.
for (name, addr) in &self.reachable_via {
buf.extend_from_slice(name.as_bytes());
buf.extend_from_slice(addr);
}
buf.push(self.max_hops);
// Include keypackage_hash in signature if present
if let Some(kp_hash) = &self.keypackage_hash {
buf.push(1); // presence marker
buf.extend_from_slice(kp_hash);
} else {
buf.push(0); // absence marker
}
buf
}
/// Verify the Ed25519 signature on this announcement.
pub fn verify(&self) -> bool {
let identity_key: [u8; 32] = match self.identity_key.as_slice().try_into() {
Ok(k) => k,
Err(_) => return false,
};
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = self.signable_bytes();
quicprochat_core::IdentityKeypair::verify_raw(&identity_key, &signable, &sig).is_ok()
}
/// Check whether this announce has expired relative to a maximum age.
pub fn is_expired(&self, max_age_secs: u64) -> bool {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
now.saturating_sub(self.timestamp) > max_age_secs
}
/// Create a forwarded copy with `hop_count` incremented by one.
///
/// The signature remains the original — forwarding nodes do not re-sign.
pub fn forwarded(&self) -> Self {
let mut copy = self.clone();
copy.hop_count = copy.hop_count.saturating_add(1);
copy
}
/// Whether this announce can still propagate (under hop limit and not expired).
///
/// Uses a generous default max age of 1800 seconds (30 minutes) for the
/// expiry check. Callers that need a different max age should check
/// [`is_expired`](Self::is_expired) separately.
pub fn can_propagate(&self) -> bool {
self.hop_count < self.max_hops && !self.is_expired(1800)
}
/// Serialize to compact CBOR binary format (for wire transmission).
pub fn to_wire(&self) -> Vec<u8> {
let mut buf = Vec::new();
ciborium::into_writer(self, &mut buf).expect("CBOR serialization should not fail");
buf
}
/// Deserialize from CBOR binary format.
pub fn from_wire(bytes: &[u8]) -> anyhow::Result<Self> {
let announce: Self = ciborium::from_reader(bytes)?;
Ok(announce)
}
}
#[cfg(test)]
mod tests {
use super::*;
fn test_identity() -> MeshIdentity {
MeshIdentity::generate()
}
#[test]
fn create_and_verify() {
let id = test_identity();
let announce = MeshAnnounce::new(
&id,
CAP_RELAY | CAP_STORE,
vec![("tcp".into(), b"127.0.0.1:9000".to_vec())],
8,
);
assert!(announce.verify(), "freshly created announce must verify");
assert_eq!(announce.hop_count, 0);
assert_eq!(announce.identity_key, id.public_key().to_vec());
assert_eq!(announce.capabilities, CAP_RELAY | CAP_STORE);
assert_eq!(announce.max_hops, 8);
}
#[test]
fn tampered_fails_verify() {
let id = test_identity();
let mut announce = MeshAnnounce::new(&id, CAP_RELAY, vec![], 4);
announce.capabilities = CAP_GATEWAY; // tamper
assert!(
!announce.verify(),
"tampered announce must fail verification"
);
}
#[test]
fn forwarded_still_verifies() {
let id = test_identity();
let announce = MeshAnnounce::new(&id, CAP_RELAY, vec![], 8);
assert!(announce.verify());
let fwd = announce.forwarded();
assert_eq!(fwd.hop_count, 1);
assert!(
fwd.verify(),
"forwarded announce must still verify (hop_count excluded from signature)"
);
let fwd2 = fwd.forwarded();
assert_eq!(fwd2.hop_count, 2);
assert!(fwd2.verify(), "double-forwarded must still verify");
}
#[test]
fn expired_announce() {
let id = test_identity();
let mut announce = MeshAnnounce::new(&id, 0, vec![], 4);
// Set timestamp far in the past.
announce.timestamp = 0;
assert!(announce.is_expired(60), "announce from epoch should be expired with 60s max age");
}
#[test]
fn address_from_key_deterministic() {
let key = [42u8; 32];
let addr1 = compute_address(&key);
let addr2 = compute_address(&key);
assert_eq!(addr1, addr2, "same key must produce same address");
// Different key produces different address.
let other_key = [99u8; 32];
let other_addr = compute_address(&other_key);
assert_ne!(addr1, other_addr);
}
#[test]
fn cbor_roundtrip() {
let id = test_identity();
let announce = MeshAnnounce::new(
&id,
CAP_RELAY | CAP_GATEWAY,
vec![
("tcp".into(), b"127.0.0.1:9000".to_vec()),
("lora".into(), vec![0x01, 0x02, 0x03, 0x04]),
],
6,
);
let wire = announce.to_wire();
let restored = MeshAnnounce::from_wire(&wire).expect("CBOR deserialize");
assert_eq!(announce.identity_key, restored.identity_key);
assert_eq!(announce.address, restored.address);
assert_eq!(announce.capabilities, restored.capabilities);
assert_eq!(announce.sequence, restored.sequence);
assert_eq!(announce.timestamp, restored.timestamp);
assert_eq!(announce.reachable_via, restored.reachable_via);
assert_eq!(announce.hop_count, restored.hop_count);
assert_eq!(announce.max_hops, restored.max_hops);
assert_eq!(announce.signature, restored.signature);
assert!(restored.verify());
}
}


@@ -0,0 +1,302 @@
//! Announce protocol engine — sends, receives, and propagates mesh announcements.
//!
//! This module ties together [`MeshAnnounce`], [`RoutingTable`], and
//! deduplication logic to form a complete announce processing pipeline.
use std::collections::HashSet;
use std::time::Duration;
use crate::announce::MeshAnnounce;
use crate::identity::MeshIdentity;
use crate::routing_table::RoutingTable;
use crate::transport::TransportAddr;
/// Configuration for the announce protocol.
#[derive(Clone, Debug)]
pub struct AnnounceConfig {
/// Interval between periodic re-announcements.
pub announce_interval: Duration,
/// Maximum age before an announce is considered expired.
pub max_announce_age: Duration,
/// Maximum hops for announce propagation.
pub max_hops: u8,
/// This node's capabilities.
pub capabilities: u16,
/// Interval for routing table garbage collection.
pub gc_interval: Duration,
}
impl Default for AnnounceConfig {
fn default() -> Self {
Self {
announce_interval: Duration::from_secs(600), // 10 minutes
max_announce_age: Duration::from_secs(1800), // 30 minutes
max_hops: 8,
capabilities: 0,
gc_interval: Duration::from_secs(60),
}
}
}
/// Tracks which announces we've already seen (to prevent re-broadcast loops).
pub struct AnnounceDedup {
/// Set of (address, sequence) pairs we've seen.
seen: HashSet<([u8; 16], u64)>,
/// Maximum entries before pruning.
max_entries: usize,
}
impl AnnounceDedup {
/// Create a new dedup tracker with the given capacity.
pub fn new(max_entries: usize) -> Self {
Self {
seen: HashSet::new(),
max_entries,
}
}
/// Check if this announce is new (not seen before).
///
/// Returns `true` if the (address, sequence) pair has not been seen before,
/// and adds it to the set. Returns `false` if it was already seen.
pub fn is_new(&mut self, address: &[u8; 16], sequence: u64) -> bool {
if self.seen.len() >= self.max_entries {
self.prune();
}
self.seen.insert((*address, sequence))
}
/// Clear all seen entries; called by `is_new` when the set reaches capacity.
///
/// Uses a simple clear-all strategy; a more sophisticated implementation
/// could track insertion order and evict only the oldest entries.
pub fn prune(&mut self) {
self.seen.clear();
}
}
/// Create this node's own mesh announcement.
pub fn create_announce(
identity: &MeshIdentity,
config: &AnnounceConfig,
sequence: u64,
reachable_via: Vec<(String, Vec<u8>)>,
) -> MeshAnnounce {
MeshAnnounce::with_sequence(
identity,
config.capabilities,
reachable_via,
config.max_hops,
sequence,
)
}
/// Process a received mesh announcement.
///
/// Steps:
/// 1. Verify signature — return `None` if invalid.
/// 2. Check if expired — return `None` if stale.
/// 3. Check dedup — return `None` if already seen.
/// 4. Update routing table.
/// 5. If `can_propagate` — return `Some(forwarded)` for re-broadcast.
/// 6. Otherwise return `None`.
pub fn process_received_announce(
announce: &MeshAnnounce,
routing_table: &mut RoutingTable,
dedup: &mut AnnounceDedup,
received_via: &str,
received_from: TransportAddr,
max_age: Duration,
) -> Option<MeshAnnounce> {
// 1. Verify signature.
if !announce.verify() {
return None;
}
// 2. Check expiry.
if announce.is_expired(max_age.as_secs()) {
return None;
}
// 3. Dedup check.
if !dedup.is_new(&announce.address, announce.sequence) {
return None;
}
// 4. Update routing table.
routing_table.update(announce, received_via, received_from);
// 5. Check if the announce can propagate further. Expiry was already
// checked in step 2, so only the hop limit matters here.
if announce.hop_count < announce.max_hops {
Some(announce.forwarded())
} else {
None
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::announce::CAP_RELAY;
use crate::identity::MeshIdentity;
fn test_identity() -> MeshIdentity {
MeshIdentity::generate()
}
fn default_config() -> AnnounceConfig {
AnnounceConfig {
capabilities: CAP_RELAY,
..AnnounceConfig::default()
}
}
#[test]
fn create_announce_is_valid() {
let id = test_identity();
let config = default_config();
let announce = create_announce(
&id,
&config,
1,
vec![("tcp".into(), b"127.0.0.1:9000".to_vec())],
);
assert!(announce.verify());
assert_eq!(announce.sequence, 1);
assert_eq!(announce.capabilities, CAP_RELAY);
assert_eq!(announce.max_hops, 8);
assert_eq!(announce.hop_count, 0);
}
#[test]
fn process_valid_announce_updates_table() {
let id = test_identity();
let config = default_config();
let announce = create_announce(&id, &config, 1, vec![]);
let mut table = RoutingTable::new(Duration::from_secs(300));
let mut dedup = AnnounceDedup::new(1000);
let addr = TransportAddr::Socket("127.0.0.1:9000".parse().unwrap());
let result = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr,
Duration::from_secs(1800),
);
// Should propagate (hop_count 0 < max_hops 8).
assert!(result.is_some());
// Routing table should have the entry.
assert_eq!(table.len(), 1);
}
#[test]
fn process_duplicate_ignored() {
let id = test_identity();
let config = default_config();
let announce = create_announce(&id, &config, 1, vec![]);
let mut table = RoutingTable::new(Duration::from_secs(300));
let mut dedup = AnnounceDedup::new(1000);
let addr = TransportAddr::Socket("127.0.0.1:9000".parse().unwrap());
// First time — accepted.
let result1 = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr.clone(),
Duration::from_secs(1800),
);
assert!(result1.is_some());
// Second time — duplicate, ignored.
let result2 = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr,
Duration::from_secs(1800),
);
assert!(result2.is_none());
}
#[test]
fn process_expired_ignored() {
let id = test_identity();
let config = default_config();
let mut announce = create_announce(&id, &config, 1, vec![]);
// Set timestamp far in the past.
announce.timestamp = 0;
let mut table = RoutingTable::new(Duration::from_secs(300));
let mut dedup = AnnounceDedup::new(1000);
let addr = TransportAddr::Socket("127.0.0.1:9000".parse().unwrap());
let result = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr,
Duration::from_secs(60),
);
assert!(result.is_none(), "expired announce must be ignored");
assert!(table.is_empty());
}
#[test]
fn process_invalid_sig_ignored() {
let id = test_identity();
let config = default_config();
let mut announce = create_announce(&id, &config, 1, vec![]);
// Tamper with capabilities to invalidate signature.
announce.capabilities = 0xFFFF;
let mut table = RoutingTable::new(Duration::from_secs(300));
let mut dedup = AnnounceDedup::new(1000);
let addr = TransportAddr::Socket("127.0.0.1:9000".parse().unwrap());
let result = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr,
Duration::from_secs(1800),
);
assert!(result.is_none(), "tampered announce must be ignored");
assert!(table.is_empty());
}
#[test]
fn process_returns_forwarded_for_propagation() {
let id = test_identity();
let config = default_config();
let announce = create_announce(&id, &config, 1, vec![]);
assert_eq!(announce.hop_count, 0);
let mut table = RoutingTable::new(Duration::from_secs(300));
let mut dedup = AnnounceDedup::new(1000);
let addr = TransportAddr::Socket("127.0.0.1:9000".parse().unwrap());
let result = process_received_announce(
&announce,
&mut table,
&mut dedup,
"tcp",
addr,
Duration::from_secs(1800),
);
let forwarded = result.expect("should return forwarded announce");
assert_eq!(forwarded.hop_count, 1);
assert!(forwarded.verify(), "forwarded announce must still verify");
}
}


@@ -0,0 +1,460 @@
//! Runtime configuration for mesh networking.
//!
//! This module provides centralized configuration with sensible defaults
//! and validation. Configuration can be loaded from files, environment
//! variables, or set programmatically.
use std::path::PathBuf;
use std::time::Duration;
use serde::{Deserialize, Serialize};
use crate::error::{ConfigError, MeshResult};
use crate::transport::CryptoMode;
/// Top-level mesh node configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct MeshConfig {
/// Node identity configuration.
pub identity: IdentityConfig,
/// Announce protocol configuration.
pub announce: AnnounceConfig,
/// Routing configuration.
pub routing: RoutingConfig,
/// Store-and-forward configuration.
pub store: StoreConfig,
/// Transport configuration.
pub transport: TransportConfig,
/// Crypto configuration.
pub crypto: CryptoConfig,
/// Rate limiting configuration.
pub rate_limit: RateLimitConfig,
/// Logging configuration.
pub logging: LoggingConfig,
}
impl Default for MeshConfig {
fn default() -> Self {
Self {
identity: IdentityConfig::default(),
announce: AnnounceConfig::default(),
routing: RoutingConfig::default(),
store: StoreConfig::default(),
transport: TransportConfig::default(),
crypto: CryptoConfig::default(),
rate_limit: RateLimitConfig::default(),
logging: LoggingConfig::default(),
}
}
}
impl MeshConfig {
/// Load configuration from a TOML file.
pub fn from_file(path: &PathBuf) -> MeshResult<Self> {
let content = std::fs::read_to_string(path).map_err(|e| {
ConfigError::Parse(format!("failed to read config file: {}", e))
})?;
Self::from_toml(&content)
}
/// Parse configuration from TOML string.
pub fn from_toml(toml: &str) -> MeshResult<Self> {
let config: Self = toml::from_str(toml).map_err(|e| {
ConfigError::Parse(format!("TOML parse error: {}", e))
})?;
config.validate()?;
Ok(config)
}
/// Serialize to TOML string.
pub fn to_toml(&self) -> MeshResult<String> {
toml::to_string_pretty(self).map_err(|e| {
ConfigError::Parse(format!("TOML serialize error: {}", e)).into()
})
}
/// Validate configuration values.
pub fn validate(&self) -> MeshResult<()> {
self.announce.validate()?;
self.routing.validate()?;
self.store.validate()?;
self.rate_limit.validate()?;
Ok(())
}
/// Create a minimal config for constrained devices.
pub fn constrained() -> Self {
Self {
store: StoreConfig {
max_messages: 100,
max_keypackages: 50,
..Default::default()
},
routing: RoutingConfig {
max_entries: 100,
..Default::default()
},
announce: AnnounceConfig {
interval: Duration::from_secs(1800), // 30 min
..Default::default()
},
crypto: CryptoConfig {
default_mode: CryptoMode::MlsLiteUnsigned,
..Default::default()
},
..Default::default()
}
}
}
/// Identity configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct IdentityConfig {
/// Path to persist identity keypair.
pub keypair_path: Option<PathBuf>,
/// Whether to auto-generate keypair if missing.
pub auto_generate: bool,
}
impl Default for IdentityConfig {
fn default() -> Self {
Self {
keypair_path: None,
auto_generate: true,
}
}
}
/// Announce protocol configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct AnnounceConfig {
/// Interval between periodic announcements.
#[serde(with = "humantime_serde")]
pub interval: Duration,
/// Maximum age before announce is considered stale.
#[serde(with = "humantime_serde")]
pub max_age: Duration,
/// Maximum propagation hops.
pub max_hops: u8,
/// Capabilities to advertise.
pub capabilities: u16,
/// Whether to include KeyPackage hash in announces.
pub include_keypackage: bool,
}
impl Default for AnnounceConfig {
fn default() -> Self {
Self {
interval: Duration::from_secs(600), // 10 min
max_age: Duration::from_secs(1800), // 30 min
max_hops: 8,
capabilities: 0x0003, // CAP_RELAY | CAP_STORE
include_keypackage: true,
}
}
}
impl AnnounceConfig {
fn validate(&self) -> MeshResult<()> {
if self.interval < Duration::from_secs(10) {
return Err(ConfigError::InvalidValue {
key: "announce.interval".to_string(),
reason: "must be at least 10 seconds".to_string(),
}.into());
}
if self.max_hops == 0 || self.max_hops > 32 {
return Err(ConfigError::InvalidValue {
key: "announce.max_hops".to_string(),
reason: "must be between 1 and 32".to_string(),
}.into());
}
Ok(())
}
}
/// Routing configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct RoutingConfig {
/// Maximum routing table entries.
pub max_entries: usize,
/// Default route TTL.
#[serde(with = "humantime_serde")]
pub default_ttl: Duration,
/// How often to garbage collect expired routes.
#[serde(with = "humantime_serde")]
pub gc_interval: Duration,
}
impl Default for RoutingConfig {
fn default() -> Self {
Self {
max_entries: 10_000,
default_ttl: Duration::from_secs(1800), // 30 min
gc_interval: Duration::from_secs(60),
}
}
}
impl RoutingConfig {
fn validate(&self) -> MeshResult<()> {
if self.max_entries == 0 {
return Err(ConfigError::InvalidValue {
key: "routing.max_entries".to_string(),
reason: "must be at least 1".to_string(),
}.into());
}
Ok(())
}
}
/// Store-and-forward configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct StoreConfig {
/// Maximum messages in store.
pub max_messages: usize,
/// Maximum messages per recipient.
pub max_per_recipient: usize,
/// Maximum cached KeyPackages.
pub max_keypackages: usize,
/// Maximum KeyPackages per address.
pub max_keypackages_per_addr: usize,
/// Default message TTL.
#[serde(with = "humantime_serde")]
pub default_ttl: Duration,
/// Path for persistent storage (None = in-memory only).
pub persistence_path: Option<PathBuf>,
}
impl Default for StoreConfig {
fn default() -> Self {
Self {
max_messages: 10_000,
max_per_recipient: 100,
max_keypackages: 1_000,
max_keypackages_per_addr: 3,
default_ttl: Duration::from_secs(24 * 3600), // 24 hours
persistence_path: None,
}
}
}
impl StoreConfig {
fn validate(&self) -> MeshResult<()> {
if self.max_messages == 0 {
return Err(ConfigError::InvalidValue {
key: "store.max_messages".to_string(),
reason: "must be at least 1".to_string(),
}.into());
}
Ok(())
}
}
/// Transport configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct TransportConfig {
/// Enable iroh/QUIC transport.
pub enable_iroh: bool,
/// Enable TCP transport.
pub enable_tcp: bool,
/// TCP listen address.
pub tcp_listen: Option<String>,
/// Enable LoRa transport.
pub enable_lora: bool,
/// LoRa device path (e.g., /dev/ttyUSB0).
pub lora_device: Option<String>,
/// LoRa spreading factor (7-12).
pub lora_sf: u8,
/// LoRa bandwidth in kHz.
pub lora_bw: u32,
/// Connection timeout.
#[serde(with = "humantime_serde")]
pub connect_timeout: Duration,
/// Send timeout.
#[serde(with = "humantime_serde")]
pub send_timeout: Duration,
}
impl Default for TransportConfig {
fn default() -> Self {
Self {
enable_iroh: true,
enable_tcp: true,
tcp_listen: None,
enable_lora: false,
lora_device: None,
lora_sf: 10,
lora_bw: 125,
connect_timeout: Duration::from_secs(10),
send_timeout: Duration::from_secs(30),
}
}
}
/// Crypto configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct CryptoConfig {
/// Default crypto mode.
pub default_mode: CryptoMode,
/// Whether to auto-upgrade to better crypto when available.
pub auto_upgrade: bool,
/// Whether to sign MLS-Lite messages.
pub mls_lite_sign: bool,
/// Enable post-quantum hybrid mode.
pub enable_pq: bool,
}
impl Default for CryptoConfig {
fn default() -> Self {
Self {
default_mode: CryptoMode::MlsClassical,
auto_upgrade: true,
mls_lite_sign: true,
enable_pq: false, // PQ is large, opt-in
}
}
}
/// Rate limiting configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct RateLimitConfig {
/// Maximum announces per peer per minute.
pub announce_per_peer_per_min: u32,
/// Maximum messages per peer per minute.
pub message_per_peer_per_min: u32,
/// Maximum KeyPackage requests per minute.
pub keypackage_requests_per_min: u32,
/// LoRa duty cycle limit (0.0-1.0, e.g., 0.01 = 1%).
pub lora_duty_cycle: f32,
}
impl Default for RateLimitConfig {
fn default() -> Self {
Self {
announce_per_peer_per_min: 10,
message_per_peer_per_min: 60,
keypackage_requests_per_min: 20,
lora_duty_cycle: 0.01, // EU868 1% default
}
}
}
impl RateLimitConfig {
fn validate(&self) -> MeshResult<()> {
if self.lora_duty_cycle < 0.0 || self.lora_duty_cycle > 1.0 {
return Err(ConfigError::InvalidValue {
key: "rate_limit.lora_duty_cycle".to_string(),
reason: "must be between 0.0 and 1.0".to_string(),
}.into());
}
Ok(())
}
}
/// Logging configuration.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct LoggingConfig {
/// Log level (trace, debug, info, warn, error).
pub level: String,
/// Whether to log to file.
pub file: Option<PathBuf>,
/// Whether to include timestamps.
pub timestamps: bool,
/// Whether to include span context.
pub spans: bool,
}
impl Default for LoggingConfig {
fn default() -> Self {
Self {
level: "info".to_string(),
file: None,
timestamps: true,
spans: false,
}
}
}
// Serde helper for CryptoMode
impl Serialize for CryptoMode {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let s = match self {
CryptoMode::MlsHybrid => "mls-hybrid",
CryptoMode::MlsClassical => "mls-classical",
CryptoMode::MlsLiteSigned => "mls-lite-signed",
CryptoMode::MlsLiteUnsigned => "mls-lite-unsigned",
};
serializer.serialize_str(s)
}
}
impl<'de> Deserialize<'de> for CryptoMode {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
match s.as_str() {
"mls-hybrid" => Ok(CryptoMode::MlsHybrid),
"mls-classical" => Ok(CryptoMode::MlsClassical),
"mls-lite-signed" => Ok(CryptoMode::MlsLiteSigned),
"mls-lite-unsigned" => Ok(CryptoMode::MlsLiteUnsigned),
_ => Err(serde::de::Error::unknown_variant(
&s,
&["mls-hybrid", "mls-classical", "mls-lite-signed", "mls-lite-unsigned"],
)),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn default_config_is_valid() {
let config = MeshConfig::default();
assert!(config.validate().is_ok());
}
#[test]
fn constrained_config_is_valid() {
let config = MeshConfig::constrained();
assert!(config.validate().is_ok());
assert_eq!(config.store.max_messages, 100);
}
#[test]
fn toml_roundtrip() {
let config = MeshConfig::default();
let toml = config.to_toml().expect("serialize");
let restored = MeshConfig::from_toml(&toml).expect("parse");
assert_eq!(config.announce.max_hops, restored.announce.max_hops);
}
#[test]
fn invalid_announce_interval() {
let mut config = MeshConfig::default();
config.announce.interval = Duration::from_secs(1); // Too short
assert!(config.validate().is_err());
}
#[test]
fn invalid_duty_cycle() {
let mut config = MeshConfig::default();
config.rate_limit.lora_duty_cycle = 2.0; // > 1.0
assert!(config.validate().is_err());
}
}


@@ -0,0 +1,337 @@
//! Crypto mode negotiation and upgrade path.
//!
//! This module handles transitions between crypto modes based on transport
//! capability. Groups can upgrade from MLS-Lite to full MLS when a
//! higher-bandwidth transport becomes available.
//!
//! # Upgrade Path
//!
//! ```text
//! MLS-Lite (constrained) → Full MLS (when high-bandwidth available)
//!
//! 1. Group running MLS-Lite over LoRa
//! 2. Member connects via WiFi/QUIC
//! 3. Member sends MLS KeyPackage over fast link
//! 4. Creator imports MLS-Lite members into MLS group
//! 5. Sends MLS Welcome + epoch secret derivation
//! 6. Group transitions to full MLS (can still use LoRa for app messages)
//! ```
//!
//! # Security Considerations
//!
//! - Upgrade requires re-keying (new epoch in MLS)
//! - Cannot downgrade without explicit action (security property)
//! - MLS-Lite epoch secret can be derived from MLS export
use crate::mls_lite::MlsLiteGroup;
use crate::transport::{CryptoMode, TransportCapability};
/// State of a group's crypto negotiation.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum GroupCryptoState {
/// Group uses MLS-Lite with pre-shared key.
MlsLite {
group_id: [u8; 8],
epoch: u16,
signed: bool,
},
/// Group uses full MLS.
FullMls {
group_id: Vec<u8>,
epoch: u64,
hybrid_pq: bool,
},
/// Group is upgrading from MLS-Lite to full MLS.
Upgrading {
lite_group_id: [u8; 8],
lite_epoch: u16,
mls_group_id: Vec<u8>,
},
}
impl GroupCryptoState {
/// Current crypto mode.
pub fn mode(&self) -> CryptoMode {
match self {
Self::MlsLite { signed: true, .. } => CryptoMode::MlsLiteSigned,
Self::MlsLite { signed: false, .. } => CryptoMode::MlsLiteUnsigned,
Self::FullMls { hybrid_pq: true, .. } => CryptoMode::MlsHybrid,
Self::FullMls { hybrid_pq: false, .. } => CryptoMode::MlsClassical,
Self::Upgrading { .. } => CryptoMode::MlsClassical, // Upgrading assumes MLS available
}
}
/// Check if upgrade to full MLS is possible.
pub fn can_upgrade(&self, available_capability: TransportCapability) -> bool {
match self {
Self::MlsLite { .. } => available_capability.supports_mls(),
Self::FullMls { hybrid_pq: false, .. } => {
// Can upgrade from classical MLS to hybrid if unconstrained
available_capability == TransportCapability::Unconstrained
}
_ => false,
}
}
/// Check if this state supports the given transport capability.
pub fn compatible_with(&self, capability: TransportCapability) -> bool {
match self {
Self::MlsLite { .. } => true, // MLS-Lite works on all transports
Self::FullMls { hybrid_pq: true, .. } => {
capability == TransportCapability::Unconstrained
}
Self::FullMls { hybrid_pq: false, .. } => capability.supports_mls(),
Self::Upgrading { .. } => capability.supports_mls(),
}
}
}
/// Parameters for deriving MLS-Lite key from MLS epoch secret.
///
/// This enables bootstrapping MLS-Lite from an existing MLS group.
#[derive(Clone, Debug)]
pub struct MlsLiteBootstrap {
/// MLS group ID (for domain separation).
pub mls_group_id: Vec<u8>,
/// MLS epoch from which to derive.
pub mls_epoch: u64,
/// Label for HKDF derivation.
pub label: &'static str,
}
impl MlsLiteBootstrap {
/// Standard label for MLS-Lite derivation.
pub const LABEL: &'static str = "quicprochat-mls-lite-from-mls";
/// Create bootstrap parameters from MLS group state.
pub fn new(mls_group_id: Vec<u8>, mls_epoch: u64) -> Self {
Self {
mls_group_id,
mls_epoch,
label: Self::LABEL,
}
}
/// Derive an MLS-Lite group secret from MLS epoch secret.
///
/// Uses HKDF with the epoch secret as input keying material.
pub fn derive_lite_secret(&self, mls_epoch_secret: &[u8]) -> [u8; 32] {
use hkdf::Hkdf;
use sha2::Sha256;
let salt = b"quicprochat-mls-lite-bootstrap-v1";
let hk = Hkdf::<Sha256>::new(Some(salt), mls_epoch_secret);
let mut info = Vec::with_capacity(self.mls_group_id.len() + 8 + self.label.len());
info.extend_from_slice(&self.mls_group_id);
info.extend_from_slice(&self.mls_epoch.to_be_bytes());
info.extend_from_slice(self.label.as_bytes());
let mut secret = [0u8; 32];
hk.expand(&info, &mut secret)
.expect("HKDF expand should not fail");
secret
}
/// Derive MLS-Lite group ID from MLS group ID.
pub fn derive_lite_group_id(&self) -> [u8; 8] {
use sha2::{Digest, Sha256};
let mut hasher = Sha256::new();
hasher.update(b"mls-lite-group-id:");
hasher.update(&self.mls_group_id);
hasher.update(&self.mls_epoch.to_be_bytes());
let hash = hasher.finalize();
let mut id = [0u8; 8];
id.copy_from_slice(&hash[..8]);
id
}
}
/// Create an MLS-Lite group derived from MLS epoch secret.
///
/// This enables constrained-link fallback for established MLS groups.
pub fn create_lite_from_mls(
mls_group_id: &[u8],
mls_epoch: u64,
mls_epoch_secret: &[u8],
) -> MlsLiteGroup {
let bootstrap = MlsLiteBootstrap::new(mls_group_id.to_vec(), mls_epoch);
let lite_secret = bootstrap.derive_lite_secret(mls_epoch_secret);
let lite_group_id = bootstrap.derive_lite_group_id();
MlsLiteGroup::new(lite_group_id, &lite_secret, 0)
}
/// Upgrade request message sent when initiating MLS upgrade.
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub struct UpgradeRequest {
/// MLS-Lite group being upgraded.
pub lite_group_id: [u8; 8],
/// Current MLS-Lite epoch.
pub lite_epoch: u16,
/// Requester's MLS KeyPackage.
pub keypackage: Vec<u8>,
}
/// Upgrade response with MLS Welcome for the upgrading member.
#[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
pub struct UpgradeResponse {
/// MLS-Lite group being upgraded.
pub lite_group_id: [u8; 8],
/// New MLS group ID.
pub mls_group_id: Vec<u8>,
/// MLS Welcome message for the requesting member.
pub mls_welcome: Vec<u8>,
/// Derived MLS-Lite secret for constrained links (optional).
/// Allows continued MLS-Lite operation alongside full MLS.
pub derived_lite_secret: Option<[u8; 32]>,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn crypto_state_modes() {
let lite_unsigned = GroupCryptoState::MlsLite {
group_id: [0; 8],
epoch: 0,
signed: false,
};
assert_eq!(lite_unsigned.mode(), CryptoMode::MlsLiteUnsigned);
let lite_signed = GroupCryptoState::MlsLite {
group_id: [0; 8],
epoch: 0,
signed: true,
};
assert_eq!(lite_signed.mode(), CryptoMode::MlsLiteSigned);
let mls_classical = GroupCryptoState::FullMls {
group_id: vec![1, 2, 3],
epoch: 5,
hybrid_pq: false,
};
assert_eq!(mls_classical.mode(), CryptoMode::MlsClassical);
let mls_hybrid = GroupCryptoState::FullMls {
group_id: vec![1, 2, 3],
epoch: 5,
hybrid_pq: true,
};
assert_eq!(mls_hybrid.mode(), CryptoMode::MlsHybrid);
}
#[test]
fn can_upgrade_from_lite() {
let lite = GroupCryptoState::MlsLite {
group_id: [0; 8],
epoch: 0,
signed: true,
};
// Can upgrade with unconstrained transport
assert!(lite.can_upgrade(TransportCapability::Unconstrained));
assert!(lite.can_upgrade(TransportCapability::Medium));
// Cannot upgrade with constrained transport
assert!(!lite.can_upgrade(TransportCapability::Constrained));
assert!(!lite.can_upgrade(TransportCapability::SeverelyConstrained));
}
#[test]
fn can_upgrade_classical_to_hybrid() {
let classical = GroupCryptoState::FullMls {
group_id: vec![1, 2, 3],
epoch: 5,
hybrid_pq: false,
};
assert!(classical.can_upgrade(TransportCapability::Unconstrained));
assert!(!classical.can_upgrade(TransportCapability::Medium));
}
#[test]
fn bootstrap_derivation() {
let mls_group_id = b"test-mls-group".to_vec();
let mls_epoch = 42u64;
let mls_secret = [0x42u8; 32];
let bootstrap = MlsLiteBootstrap::new(mls_group_id.clone(), mls_epoch);
// Secret derivation should be deterministic
let secret1 = bootstrap.derive_lite_secret(&mls_secret);
let secret2 = bootstrap.derive_lite_secret(&mls_secret);
assert_eq!(secret1, secret2);
// Different epoch should give different secret
let bootstrap2 = MlsLiteBootstrap::new(mls_group_id, mls_epoch + 1);
let secret3 = bootstrap2.derive_lite_secret(&mls_secret);
assert_ne!(secret1, secret3);
// Group ID derivation
let lite_id = bootstrap.derive_lite_group_id();
assert_eq!(lite_id.len(), 8);
}
#[test]
fn create_lite_from_mls_works() {
let mls_group_id = b"mls-group-123".to_vec();
let mls_epoch = 10;
let mls_secret = [0xABu8; 32];
let lite_group = create_lite_from_mls(&mls_group_id, mls_epoch, &mls_secret);
// Should be able to encrypt/decrypt
let mut alice = lite_group;
let mut bob = create_lite_from_mls(&mls_group_id, mls_epoch, &mls_secret);
let (ct, nonce, _seq) = alice.encrypt(b"hello from alice").expect("encrypt");
use crate::address::MeshAddress;
let alice_addr = MeshAddress::from_bytes([0xAA; 16]);
match bob.decrypt(&ct, &nonce, alice_addr) {
crate::mls_lite::DecryptResult::Success(pt) => {
assert_eq!(pt, b"hello from alice");
}
other => panic!("expected Success, got {other:?}"),
}
}
#[test]
fn compatibility_check() {
let lite = GroupCryptoState::MlsLite {
group_id: [0; 8],
epoch: 0,
signed: true,
};
// MLS-Lite works on all transports
assert!(lite.compatible_with(TransportCapability::Unconstrained));
assert!(lite.compatible_with(TransportCapability::SeverelyConstrained));
let mls_hybrid = GroupCryptoState::FullMls {
group_id: vec![1],
epoch: 1,
hybrid_pq: true,
};
// PQ-hybrid only works on unconstrained
assert!(mls_hybrid.compatible_with(TransportCapability::Unconstrained));
assert!(!mls_hybrid.compatible_with(TransportCapability::Medium));
let mls_classical = GroupCryptoState::FullMls {
group_id: vec![1],
epoch: 1,
hybrid_pq: false,
};
// Classical MLS works on medium+
assert!(mls_classical.compatible_with(TransportCapability::Unconstrained));
assert!(mls_classical.compatible_with(TransportCapability::Medium));
assert!(!mls_classical.compatible_with(TransportCapability::Constrained));
}
}
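The upgrade and compatibility tests above encode a transport/crypto matrix: MLS-Lite runs on every link, classical MLS needs a Medium-or-better link, and PQ-hybrid MLS needs an unconstrained one. As a hedged sketch, the matrix can be modeled standalone; the names mirror the crate's types, but this is an illustrative reimplementation under those assumptions, not the crate's actual code.

```rust
// Illustrative model of the compatibility matrix exercised by the tests
// above. Standalone sketch; the real GroupCryptoState carries group_id,
// epoch, and signing state as well.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum TransportCapability {
    Unconstrained,
    Medium,
    Constrained,
    SeverelyConstrained,
}

pub enum GroupCryptoState {
    MlsLite,
    FullMls { hybrid_pq: bool },
}

impl GroupCryptoState {
    // MLS-Lite works everywhere; classical MLS needs Medium or better;
    // PQ-hybrid MLS needs an unconstrained link.
    pub fn compatible_with(&self, t: TransportCapability) -> bool {
        use TransportCapability::*;
        match self {
            GroupCryptoState::MlsLite => true,
            GroupCryptoState::FullMls { hybrid_pq: false } => matches!(t, Unconstrained | Medium),
            GroupCryptoState::FullMls { hybrid_pq: true } => t == Unconstrained,
        }
    }
}

fn main() {
    let lite = GroupCryptoState::MlsLite;
    assert!(lite.compatible_with(TransportCapability::SeverelyConstrained));
    let classical = GroupCryptoState::FullMls { hybrid_pq: false };
    assert!(classical.compatible_with(TransportCapability::Medium));
    assert!(!classical.compatible_with(TransportCapability::Constrained));
    let hybrid = GroupCryptoState::FullMls { hybrid_pq: true };
    assert!(!hybrid.compatible_with(TransportCapability::Medium));
    println!("compatibility matrix ok");
}
```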

View File

@@ -149,7 +149,7 @@ impl MeshEnvelope {
self.max_hops,
self.timestamp,
);
quicprochat_core::IdentityKeypair::verify_raw(&sender_key, &signable, &sig).is_ok()
}
/// Check whether this envelope has expired (TTL elapsed since timestamp).
@@ -176,13 +176,31 @@ impl MeshEnvelope {
copy
}
/// Serialize to compact CBOR binary format (for wire transmission).
pub fn to_wire(&self) -> Vec<u8> {
let mut buf = Vec::new();
ciborium::into_writer(self, &mut buf).expect("CBOR serialization should not fail");
buf
}
/// Deserialize from CBOR binary format.
pub fn from_wire(bytes: &[u8]) -> anyhow::Result<Self> {
let env: Self = ciborium::from_reader(bytes)?;
Ok(env)
}
/// Deserialize from wire format, trying CBOR first then JSON fallback.
pub fn from_wire_or_json(bytes: &[u8]) -> anyhow::Result<Self> {
Self::from_wire(bytes).or_else(|_| Self::from_bytes(bytes))
}
/// Serialize to bytes (JSON). Kept for backward compatibility and debugging.
pub fn to_bytes(&self) -> Vec<u8> {
// serde_json::to_vec should not fail on a well-formed envelope.
serde_json::to_vec(self).expect("envelope serialization should not fail")
}
/// Deserialize from bytes (JSON). Kept for backward compatibility and debugging.
pub fn from_bytes(bytes: &[u8]) -> anyhow::Result<Self> {
let env: Self = serde_json::from_slice(bytes)?;
Ok(env)
@@ -293,4 +311,128 @@ mod tests {
assert!(env.recipient_key.is_empty());
assert!(env.verify());
}
#[test]
fn cbor_roundtrip() {
let id = test_identity();
let recipient = [0xABu8; 32];
let env = MeshEnvelope::new(&id, &recipient, b"cbor roundtrip".to_vec(), 3600, 5);
let wire = env.to_wire();
let restored = MeshEnvelope::from_wire(&wire).expect("CBOR deserialize");
assert_eq!(env.id, restored.id);
assert_eq!(env.sender_key, restored.sender_key);
assert_eq!(env.recipient_key, restored.recipient_key);
assert_eq!(env.payload, restored.payload);
assert_eq!(env.ttl_secs, restored.ttl_secs);
assert_eq!(env.hop_count, restored.hop_count);
assert_eq!(env.max_hops, restored.max_hops);
assert_eq!(env.timestamp, restored.timestamp);
assert_eq!(env.signature, restored.signature);
assert!(restored.verify());
}
#[test]
fn cbor_smaller_than_json() {
let id = test_identity();
let recipient = [0xCCu8; 32];
let payload = b"a typical chat message for size comparison testing".to_vec();
let env = MeshEnvelope::new(&id, &recipient, payload, 3600, 5);
let wire_len = env.to_wire().len();
let json_len = env.to_bytes().len();
println!("CBOR wire size: {wire_len} bytes");
println!("JSON size: {json_len} bytes");
println!("Ratio: {:.1}x smaller", json_len as f64 / wire_len as f64);
assert!(
json_len * 2 > wire_len * 3,
"CBOR ({wire_len}B) should be materially smaller than JSON ({json_len}B)"
);
}
#[test]
fn cbor_backward_compat() {
let id = test_identity();
let env = MeshEnvelope::new(&id, &[0xDD; 32], b"json compat".to_vec(), 60, 3);
// Serialize as JSON (old format).
let json_bytes = env.to_bytes();
// from_wire_or_json should fall back to JSON parsing.
let restored = MeshEnvelope::from_wire_or_json(&json_bytes)
.expect("from_wire_or_json should handle JSON");
assert_eq!(env.id, restored.id);
assert_eq!(env.payload, restored.payload);
assert!(restored.verify());
}
#[test]
fn cbor_from_wire_rejects_garbage() {
let garbage = [0xFF, 0xFE, 0x00, 0x42, 0x99, 0x01, 0x02, 0x03];
let result = MeshEnvelope::from_wire(&garbage);
assert!(result.is_err(), "garbage input must return Err, not panic");
}
/// Measure MeshEnvelope overhead for various payload sizes.
/// This informs constrained link feasibility planning.
#[test]
fn measure_mesh_envelope_overhead() {
let id = test_identity();
let recipient = [0xAAu8; 32];
println!("=== MeshEnvelope Wire Overhead (CBOR) ===");
// Empty payload
let env_empty = MeshEnvelope::new(&id, &recipient, vec![], 3600, 5);
let wire_empty = env_empty.to_wire();
println!("Payload 0B: wire {} bytes (overhead: {} bytes)", wire_empty.len(), wire_empty.len());
let base_overhead = wire_empty.len();
// 1-byte payload
let env_1 = MeshEnvelope::new(&id, &recipient, vec![0x42], 3600, 5);
let wire_1 = env_1.to_wire();
println!("Payload 1B: wire {} bytes (overhead: {} bytes)", wire_1.len(), wire_1.len() - 1);
// 10-byte payload ("hello mesh")
let env_10 = MeshEnvelope::new(&id, &recipient, b"hello mesh".to_vec(), 3600, 5);
let wire_10 = env_10.to_wire();
println!("Payload 10B: wire {} bytes (overhead: {} bytes)", wire_10.len(), wire_10.len() - 10);
// 50-byte payload
let env_50 = MeshEnvelope::new(&id, &recipient, vec![0x42; 50], 3600, 5);
let wire_50 = env_50.to_wire();
println!("Payload 50B: wire {} bytes (overhead: {} bytes)", wire_50.len(), wire_50.len() - 50);
// 100-byte payload (typical short message)
let env_100 = MeshEnvelope::new(&id, &recipient, vec![0x42; 100], 3600, 5);
let wire_100 = env_100.to_wire();
println!("Payload 100B: wire {} bytes (overhead: {} bytes)", wire_100.len(), wire_100.len() - 100);
// Broadcast (empty recipient) - saves 32 bytes
let env_bc = MeshEnvelope::new(&id, &[], b"broadcast".to_vec(), 3600, 5);
let wire_bc = env_bc.to_wire();
println!("Broadcast 9B: wire {} bytes (no recipient)", wire_bc.len());
println!("\n=== LoRa Feasibility (SF12/BW125, MTU=51 bytes) ===");
println!("Empty envelope: {} fragments", (wire_empty.len() + 50) / 51);
println!("10B payload: {} fragments", (wire_10.len() + 50) / 51);
println!("100B payload: {} fragments", (wire_100.len() + 50) / 51);
// Baseline overhead is fixed fields:
// - id: 32 bytes
// - sender_key: 32 bytes
// - recipient_key: 32 bytes (or 0 for broadcast)
// - signature: 64 bytes
// - ttl_secs: 4 bytes
// - hop_count: 1 byte
// - max_hops: 1 byte
// - timestamp: 8 bytes
// Total fixed: ~174 bytes raw, CBOR adds overhead for field names/types
// Actual measured: ~400+ bytes with CBOR (field names add significant overhead)
assert!(base_overhead < 500, "Base overhead should be under 500 bytes");
assert!(base_overhead > 100, "Base overhead should be over 100 bytes (sanity check)");
}
}
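The LoRa feasibility printout above estimates fragment counts with `(len + 50) / 51`, which is integer ceiling division by the 51-byte SF12/BW125 payload limit. A minimal standalone sketch of that arithmetic, assuming the same MTU:

```rust
// Fragment-count arithmetic from the LoRa feasibility test above:
// ceiling division of the CBOR wire size by the 51-byte payload limit.
const LORA_MTU: usize = 51;

fn fragments_needed(wire_len: usize) -> usize {
    // (n + MTU - 1) / MTU is integer ceiling division, identical to the
    // (len + 50) / 51 expression used in the test.
    (wire_len + LORA_MTU - 1) / LORA_MTU
}

fn main() {
    assert_eq!(fragments_needed(51), 1);  // exactly one full frame
    assert_eq!(fragments_needed(52), 2);  // one byte over spills a frame
    // A ~410-byte V1 envelope (the measured ballpark above) needs 9 frames.
    assert_eq!(fragments_needed(410), 9);
    println!("fragmentation math ok");
}
```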

View File

@@ -0,0 +1,440 @@
//! Compact mesh envelope using truncated 16-byte addresses.
//!
//! [`MeshEnvelopeV2`] is a bandwidth-optimized envelope format for constrained
//! links (LoRa, serial). It uses [`MeshAddress`] (16 bytes) instead of full
//! 32-byte public keys, saving 32 bytes per envelope.
//!
//! Full public keys are exchanged during the announce phase and cached in the
//! routing table. The envelope only needs addresses for routing.
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::time::{SystemTime, UNIX_EPOCH};
use crate::address::MeshAddress;
use crate::identity::MeshIdentity;
/// Default maximum hops for mesh forwarding.
const DEFAULT_MAX_HOPS: u8 = 5;
/// Version byte for envelope format detection.
const ENVELOPE_V2_VERSION: u8 = 0x02;
/// Priority levels for mesh routing.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
#[repr(u8)]
pub enum Priority {
/// Lowest priority (announce, telemetry).
Low = 0,
/// Normal priority (regular messages).
Normal = 1,
/// High priority (important messages).
High = 2,
/// Emergency priority (always forwarded first).
Emergency = 3,
}
impl Default for Priority {
fn default() -> Self {
Self::Normal
}
}
impl From<u8> for Priority {
fn from(v: u8) -> Self {
match v {
0 => Self::Low,
1 => Self::Normal,
2 => Self::High,
3 => Self::Emergency,
_ => Self::Normal,
}
}
}
/// Compact mesh envelope with 16-byte truncated addresses.
///
/// # Wire overhead
///
/// - Version: 1 byte
/// - Flags: 1 byte (priority: 2 bits, reserved: 6 bits)
/// - ID: 16 bytes (truncated from 32)
/// - Sender: 16 bytes
/// - Recipient: 16 bytes (or 0 for broadcast)
/// - TTL: 2 bytes (u16, max ~18 hours)
/// - Hop count: 1 byte
/// - Max hops: 1 byte
/// - Timestamp: 4 bytes (u32, seconds since epoch mod 2^32)
/// - Signature: 64 bytes
/// - Payload: variable
///
/// **Total fixed overhead: ~122 bytes** (vs ~174 for V1 with full keys)
/// Savings: ~52 bytes per envelope
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct MeshEnvelopeV2 {
/// Format version (0x02 for V2).
pub version: u8,
/// Flags byte: bits 0-1 = priority, bits 2-7 reserved.
pub flags: u8,
/// 16-byte truncated content ID (for deduplication).
pub id: [u8; 16],
/// 16-byte truncated sender address.
pub sender_addr: MeshAddress,
/// 16-byte truncated recipient address (BROADCAST for all).
pub recipient_addr: MeshAddress,
/// Encrypted payload (opaque to mesh layer).
pub payload: Vec<u8>,
/// Time-to-live in seconds (u16, max 65535 = ~18 hours).
pub ttl_secs: u16,
/// Current hop count.
pub hop_count: u8,
/// Maximum hops before drop.
pub max_hops: u8,
/// Unix timestamp (seconds, truncated to u32).
pub timestamp: u32,
/// Ed25519 signature (64 bytes, stored as Vec for serde compatibility).
pub signature: Vec<u8>,
}
impl MeshEnvelopeV2 {
/// Create and sign a new compact mesh envelope.
pub fn new(
identity: &MeshIdentity,
recipient_addr: MeshAddress,
payload: Vec<u8>,
ttl_secs: u16,
max_hops: u8,
priority: Priority,
) -> Self {
let sender_addr = MeshAddress::from_public_key(&identity.public_key());
let hop_count = 0u8;
let max_hops = if max_hops == 0 { DEFAULT_MAX_HOPS } else { max_hops };
let timestamp = (SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs() & 0xFFFF_FFFF) as u32;
let id = Self::compute_id(
&sender_addr,
&recipient_addr,
&payload,
ttl_secs,
max_hops,
timestamp,
);
let flags = (priority as u8) & 0x03;
let mut envelope = Self {
version: ENVELOPE_V2_VERSION,
flags,
id,
sender_addr,
recipient_addr,
payload,
ttl_secs,
hop_count,
max_hops,
timestamp,
signature: Vec::new(),
};
let signable = envelope.signable_bytes();
let sig = identity.sign(&signable);
envelope.signature = sig.to_vec();
envelope
}
/// Create for broadcast (recipient = all zeros).
pub fn broadcast(
identity: &MeshIdentity,
payload: Vec<u8>,
ttl_secs: u16,
max_hops: u8,
priority: Priority,
) -> Self {
Self::new(identity, MeshAddress::BROADCAST, payload, ttl_secs, max_hops, priority)
}
/// Compute the 16-byte truncated content ID.
fn compute_id(
sender_addr: &MeshAddress,
recipient_addr: &MeshAddress,
payload: &[u8],
ttl_secs: u16,
max_hops: u8,
timestamp: u32,
) -> [u8; 16] {
let mut hasher = Sha256::new();
hasher.update(sender_addr.as_bytes());
hasher.update(recipient_addr.as_bytes());
hasher.update(payload);
hasher.update(ttl_secs.to_le_bytes());
hasher.update([max_hops]);
hasher.update(timestamp.to_le_bytes());
let hash = hasher.finalize();
let mut id = [0u8; 16];
id.copy_from_slice(&hash[..16]);
id
}
/// Bytes to sign/verify (excludes signature and hop_count).
fn signable_bytes(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(64 + self.payload.len());
buf.push(self.version);
buf.push(self.flags);
buf.extend_from_slice(&self.id);
buf.extend_from_slice(self.sender_addr.as_bytes());
buf.extend_from_slice(self.recipient_addr.as_bytes());
buf.extend_from_slice(&self.payload);
buf.extend_from_slice(&self.ttl_secs.to_le_bytes());
buf.push(self.max_hops);
buf.extend_from_slice(&self.timestamp.to_le_bytes());
buf
}
/// Verify the signature using the sender's full public key.
///
/// The caller must have the sender's full key (from announce/routing table).
pub fn verify_with_key(&self, sender_public_key: &[u8; 32]) -> bool {
// First check that the address matches the key
if !self.sender_addr.matches_key(sender_public_key) {
return false;
}
// Signature must be exactly 64 bytes
let sig: [u8; 64] = match self.signature.as_slice().try_into() {
Ok(s) => s,
Err(_) => return false,
};
let signable = self.signable_bytes();
quicprochat_core::IdentityKeypair::verify_raw(sender_public_key, &signable, &sig).is_ok()
}
/// Get the priority level.
pub fn priority(&self) -> Priority {
Priority::from(self.flags & 0x03)
}
/// Check if broadcast (recipient is all zeros).
pub fn is_broadcast(&self) -> bool {
self.recipient_addr.is_broadcast()
}
/// Check if expired.
pub fn is_expired(&self) -> bool {
let now = (SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs() & 0xFFFF_FFFF) as u32;
// Handle u32 wraparound (every ~136 years)
let elapsed = now.wrapping_sub(self.timestamp);
elapsed > self.ttl_secs as u32
}
/// Can this envelope be forwarded?
pub fn can_forward(&self) -> bool {
self.hop_count < self.max_hops && !self.is_expired()
}
/// Create a forwarded copy with hop_count incremented.
pub fn forwarded(&self) -> Self {
let mut copy = self.clone();
copy.hop_count = copy.hop_count.saturating_add(1);
copy
}
/// Serialize to compact CBOR.
pub fn to_wire(&self) -> Vec<u8> {
let mut buf = Vec::new();
ciborium::into_writer(self, &mut buf).expect("CBOR serialization should not fail");
buf
}
/// Deserialize from CBOR.
pub fn from_wire(bytes: &[u8]) -> anyhow::Result<Self> {
let env: Self = ciborium::from_reader(bytes)?;
if env.version != ENVELOPE_V2_VERSION {
anyhow::bail!("unexpected envelope version: {}", env.version);
}
Ok(env)
}
}
#[cfg(test)]
mod tests {
use super::*;
fn test_identity() -> MeshIdentity {
MeshIdentity::generate()
}
#[test]
fn create_and_verify() {
let id = test_identity();
let recipient_key = [0xBBu8; 32];
let recipient_addr = MeshAddress::from_public_key(&recipient_key);
let env = MeshEnvelopeV2::new(
&id,
recipient_addr,
b"hello compact".to_vec(),
3600,
5,
Priority::Normal,
);
assert_eq!(env.version, ENVELOPE_V2_VERSION);
assert_eq!(env.hop_count, 0);
assert!(env.verify_with_key(&id.public_key()));
assert!(!env.is_expired());
assert!(env.can_forward());
}
#[test]
fn broadcast_envelope() {
let id = test_identity();
let env = MeshEnvelopeV2::broadcast(
&id,
b"announcement".to_vec(),
300,
8,
Priority::Low,
);
assert!(env.is_broadcast());
assert_eq!(env.priority(), Priority::Low);
assert!(env.verify_with_key(&id.public_key()));
}
#[test]
fn forwarded_still_verifies() {
let id = test_identity();
let env = MeshEnvelopeV2::new(
&id,
MeshAddress::from_bytes([0xCC; 16]),
b"forward me".to_vec(),
3600,
5,
Priority::High,
);
let fwd = env.forwarded();
assert_eq!(fwd.hop_count, 1);
assert!(fwd.verify_with_key(&id.public_key()));
let fwd2 = fwd.forwarded();
assert_eq!(fwd2.hop_count, 2);
assert!(fwd2.verify_with_key(&id.public_key()));
}
#[test]
fn cbor_roundtrip() {
let id = test_identity();
let env = MeshEnvelopeV2::new(
&id,
MeshAddress::from_bytes([0xDD; 16]),
b"roundtrip test".to_vec(),
1800,
4,
Priority::Emergency,
);
let wire = env.to_wire();
let restored = MeshEnvelopeV2::from_wire(&wire).expect("deserialize");
assert_eq!(env.id, restored.id);
assert_eq!(env.sender_addr, restored.sender_addr);
assert_eq!(env.recipient_addr, restored.recipient_addr);
assert_eq!(env.payload, restored.payload);
assert_eq!(env.ttl_secs, restored.ttl_secs);
assert_eq!(env.hop_count, restored.hop_count);
assert_eq!(env.max_hops, restored.max_hops);
assert_eq!(env.timestamp, restored.timestamp);
assert_eq!(env.signature, restored.signature);
assert_eq!(env.priority(), Priority::Emergency);
}
#[test]
fn measure_v2_overhead() {
let id = test_identity();
let recipient_addr = MeshAddress::from_bytes([0xEE; 16]);
println!("=== MeshEnvelopeV2 Wire Overhead (CBOR) ===");
// Empty payload
let env_empty = MeshEnvelopeV2::new(&id, recipient_addr, vec![], 3600, 5, Priority::Normal);
let wire_empty = env_empty.to_wire();
println!("Payload 0B: wire {} bytes (overhead: {} bytes)", wire_empty.len(), wire_empty.len());
let v2_overhead = wire_empty.len();
// Compare to V1
let v1_env = crate::envelope::MeshEnvelope::new(
&id,
&[0xEE; 32],
vec![],
3600,
5,
);
let v1_wire = v1_env.to_wire();
println!("V1 empty: {} bytes", v1_wire.len());
println!("V2 savings: {} bytes ({:.1}%)",
v1_wire.len() - v2_overhead,
((v1_wire.len() - v2_overhead) as f64 / v1_wire.len() as f64) * 100.0);
// 10-byte payload
let env_10 = MeshEnvelopeV2::new(&id, recipient_addr, b"hello mesh".to_vec(), 3600, 5, Priority::Normal);
let wire_10 = env_10.to_wire();
println!("Payload 10B: wire {} bytes", wire_10.len());
// 100-byte payload
let env_100 = MeshEnvelopeV2::new(&id, recipient_addr, vec![0x42; 100], 3600, 5, Priority::Normal);
let wire_100 = env_100.to_wire();
println!("Payload 100B: wire {} bytes", wire_100.len());
// V2 should be smaller than V1 due to truncated addresses
// With CBOR field names, actual overhead is higher than theoretical minimum
// (~336 bytes for V2 vs ~410 for V1 = ~18% savings)
assert!(v2_overhead < v1_wire.len(), "V2 should be smaller than V1");
let savings_pct = ((v1_wire.len() - v2_overhead) as f64 / v1_wire.len() as f64) * 100.0;
assert!(savings_pct > 10.0, "V2 should save at least 10% vs V1");
println!("Actual V2 savings: {:.1}%", savings_pct);
}
#[test]
fn wrong_key_fails_verification() {
let id = test_identity();
let env = MeshEnvelopeV2::new(
&id,
MeshAddress::from_bytes([0xFF; 16]),
b"verify me".to_vec(),
3600,
5,
Priority::Normal,
);
// Wrong key should fail
let wrong_key = [0x42u8; 32];
assert!(!env.verify_with_key(&wrong_key));
// Correct key should pass
assert!(env.verify_with_key(&id.public_key()));
}
#[test]
fn priority_levels() {
let id = test_identity();
for prio in [Priority::Low, Priority::Normal, Priority::High, Priority::Emergency] {
let env = MeshEnvelopeV2::new(
&id,
MeshAddress::BROADCAST,
b"prio test".to_vec(),
60,
3,
prio,
);
assert_eq!(env.priority(), prio);
}
}
}
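Per the `MeshEnvelopeV2` doc comment, the flags byte packs priority into bits 0-1 and reserves bits 2-7. A hedged standalone sketch of that packing, mirroring the `flags = (priority as u8) & 0x03` and `Priority::from(self.flags & 0x03)` logic above (names reused for illustration only):

```rust
// Bit-packing of Priority into the V2 flags byte: two low bits carry the
// priority, the six reserved bits are masked off on read.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Priority { Low = 0, Normal = 1, High = 2, Emergency = 3 }

fn pack_flags(p: Priority) -> u8 {
    (p as u8) & 0x03
}

fn unpack_priority(flags: u8) -> Priority {
    match flags & 0x03 {
        0 => Priority::Low,
        1 => Priority::Normal,
        2 => Priority::High,
        _ => Priority::Emergency,
    }
}

fn main() {
    for p in [Priority::Low, Priority::Normal, Priority::High, Priority::Emergency] {
        assert_eq!(unpack_priority(pack_flags(p)), p);
    }
    // Reserved bits are ignored on read: 0b1111_1110 & 0x03 == 2 (High).
    assert_eq!(unpack_priority(0b1111_1110), Priority::High);
    println!("priority round-trip ok");
}
```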

View File

@@ -0,0 +1,354 @@
//! Production-ready error types for the mesh P2P layer.
//!
//! This module provides structured error types with context for debugging
//! and recovery. Errors are categorized by subsystem for easier handling.
use std::fmt;
use thiserror::Error;
use crate::address::MeshAddress;
use crate::transport::TransportAddr;
/// Top-level mesh error type.
#[derive(Debug, Error)]
pub enum MeshError {
/// Transport layer errors.
#[error("transport error: {0}")]
Transport(#[from] TransportError),
/// Routing errors.
#[error("routing error: {0}")]
Routing(#[from] RoutingError),
/// Crypto/encryption errors.
#[error("crypto error: {0}")]
Crypto(#[from] CryptoError),
/// Protocol errors (malformed messages, version mismatch).
#[error("protocol error: {0}")]
Protocol(#[from] ProtocolError),
/// Store/cache errors.
#[error("store error: {0}")]
Store(#[from] StoreError),
/// Configuration errors.
#[error("config error: {0}")]
Config(#[from] ConfigError),
/// Internal errors (bugs, invariant violations).
#[error("internal error: {0}")]
Internal(String),
}
/// Transport layer errors.
#[derive(Debug, Error)]
pub enum TransportError {
/// Failed to send data.
#[error("send failed to {dest}: {reason}")]
SendFailed { dest: String, reason: String },
/// Failed to receive data.
#[error("receive failed: {0}")]
ReceiveFailed(String),
/// Connection failed or lost.
#[error("connection to {dest} failed: {reason}")]
ConnectionFailed { dest: String, reason: String },
/// Transport not available.
#[error("transport '{name}' not available")]
NotAvailable { name: String },
/// No transports registered.
#[error("no transports registered")]
NoTransports,
/// MTU exceeded.
#[error("payload {size} bytes exceeds MTU {mtu} bytes")]
MtuExceeded { size: usize, mtu: usize },
/// Duty cycle limit reached.
#[error("duty cycle limit reached: {used_ms}ms used of {limit_ms}ms allowed")]
DutyCycleExceeded { used_ms: u64, limit_ms: u64 },
/// Timeout waiting for response.
#[error("timeout waiting for response from {dest}")]
Timeout { dest: String },
/// I/O error.
#[error("I/O error: {0}")]
Io(#[from] std::io::Error),
}
/// Routing errors.
#[derive(Debug, Error)]
pub enum RoutingError {
/// No route to destination.
#[error("no route to {0}")]
NoRoute(String),
/// Route expired.
#[error("route to {dest} expired (last seen {age_secs}s ago)")]
RouteExpired { dest: String, age_secs: u64 },
/// Too many hops.
#[error("max hops ({max}) exceeded for message to {dest}")]
MaxHopsExceeded { dest: String, max: u8 },
/// Message expired.
#[error("message expired (TTL {ttl_secs}s, age {age_secs}s)")]
MessageExpired { ttl_secs: u32, age_secs: u64 },
/// Duplicate message (dedup).
#[error("duplicate message ID {0}")]
Duplicate(String),
/// Routing table full.
#[error("routing table full ({capacity} entries)")]
TableFull { capacity: usize },
}
/// Crypto/encryption errors.
#[derive(Debug, Error)]
pub enum CryptoError {
/// Signature verification failed.
#[error("signature verification failed for {context}")]
SignatureInvalid { context: String },
/// Decryption failed.
#[error("decryption failed: {0}")]
DecryptionFailed(String),
/// Key not found.
#[error("key not found for {0}")]
KeyNotFound(String),
/// KeyPackage invalid or expired.
#[error("KeyPackage invalid: {0}")]
KeyPackageInvalid(String),
/// Replay attack detected.
#[error("replay detected: sequence {seq} already seen from {sender}")]
ReplayDetected { sender: String, seq: u32 },
/// Wrong epoch.
#[error("wrong epoch: expected {expected}, got {got}")]
WrongEpoch { expected: u16, got: u16 },
/// MLS error (from openmls).
#[error("MLS error: {0}")]
Mls(String),
}
/// Protocol errors.
#[derive(Debug, Error)]
pub enum ProtocolError {
/// Unknown message type.
#[error("unknown message type: 0x{0:02x}")]
UnknownMessageType(u8),
/// Invalid message format.
#[error("invalid message format: {0}")]
InvalidFormat(String),
/// Version mismatch.
#[error("protocol version mismatch: expected {expected}, got {got}")]
VersionMismatch { expected: u8, got: u8 },
/// Required field missing.
#[error("required field missing: {0}")]
MissingField(String),
/// CBOR decode error.
#[error("CBOR decode error: {0}")]
CborDecode(String),
/// CBOR encode error.
#[error("CBOR encode error: {0}")]
CborEncode(String),
/// Message too large.
#[error("message too large: {size} bytes (max {max})")]
MessageTooLarge { size: usize, max: usize },
}
/// Store/cache errors.
#[derive(Debug, Error)]
pub enum StoreError {
/// Store is full.
#[error("store full: {current}/{capacity} items")]
Full { current: usize, capacity: usize },
/// Item not found.
#[error("item not found: {0}")]
NotFound(String),
/// Persistence error.
#[error("persistence error: {0}")]
Persistence(String),
/// Serialization error.
#[error("serialization error: {0}")]
Serialization(String),
}
/// Configuration errors.
#[derive(Debug, Error)]
pub enum ConfigError {
/// Invalid configuration value.
#[error("invalid config value for '{key}': {reason}")]
InvalidValue { key: String, reason: String },
/// Missing required configuration.
#[error("missing required config: {0}")]
Missing(String),
/// Configuration parse error.
#[error("config parse error: {0}")]
Parse(String),
}
/// Result type alias for mesh operations.
pub type MeshResult<T> = Result<T, MeshError>;
/// Error context extension trait for adding context to errors.
pub trait ErrorContext<T> {
/// Add context to an error.
fn context(self, context: impl Into<String>) -> MeshResult<T>;
/// Add context with a closure (lazy evaluation).
fn with_context<F>(self, f: F) -> MeshResult<T>
where
F: FnOnce() -> String;
}
impl<T, E: Into<MeshError>> ErrorContext<T> for Result<T, E> {
fn context(self, context: impl Into<String>) -> MeshResult<T> {
self.map_err(|e| {
let err = e.into();
MeshError::Internal(format!("{}: {}", context.into(), err))
})
}
fn with_context<F>(self, f: F) -> MeshResult<T>
where
F: FnOnce() -> String,
{
self.map_err(|e| {
let err = e.into();
MeshError::Internal(format!("{}: {}", f(), err))
})
}
}
/// Convert anyhow errors to MeshError.
impl From<anyhow::Error> for MeshError {
fn from(e: anyhow::Error) -> Self {
MeshError::Internal(e.to_string())
}
}
/// Helper to create transport send errors.
impl TransportError {
pub fn send_failed(dest: &TransportAddr, reason: impl Into<String>) -> Self {
Self::SendFailed {
dest: dest.to_string(),
reason: reason.into(),
}
}
pub fn connection_failed(dest: &TransportAddr, reason: impl Into<String>) -> Self {
Self::ConnectionFailed {
dest: dest.to_string(),
reason: reason.into(),
}
}
}
/// Helper to create routing errors.
impl RoutingError {
pub fn no_route(addr: &MeshAddress) -> Self {
Self::NoRoute(format!("{}", addr))
}
pub fn no_route_bytes(addr: &[u8]) -> Self {
Self::NoRoute(hex::encode(&addr[..8.min(addr.len())]))
}
}
/// Helper to create crypto errors.
impl CryptoError {
pub fn signature_invalid(context: impl Into<String>) -> Self {
Self::SignatureInvalid {
context: context.into(),
}
}
pub fn replay(sender: &MeshAddress, seq: u32) -> Self {
Self::ReplayDetected {
sender: format!("{}", sender),
seq,
}
}
}
/// Helper to create protocol errors.
impl ProtocolError {
pub fn cbor_decode(e: impl fmt::Display) -> Self {
Self::CborDecode(e.to_string())
}
pub fn cbor_encode(e: impl fmt::Display) -> Self {
Self::CborEncode(e.to_string())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn error_display() {
let err = TransportError::SendFailed {
dest: "tcp:127.0.0.1:8080".to_string(),
reason: "connection refused".to_string(),
};
assert!(err.to_string().contains("tcp:127.0.0.1:8080"));
assert!(err.to_string().contains("connection refused"));
}
#[test]
fn error_conversion() {
let transport_err = TransportError::NoTransports;
let mesh_err: MeshError = transport_err.into();
assert!(matches!(mesh_err, MeshError::Transport(_)));
}
#[test]
fn routing_error_helpers() {
let addr = MeshAddress::from_bytes([0xAB; 16]);
let err = RoutingError::no_route(&addr);
assert!(err.to_string().contains("no route"));
}
#[test]
fn crypto_error_helpers() {
let addr = MeshAddress::from_bytes([0xCD; 16]);
let err = CryptoError::replay(&addr, 42);
assert!(err.to_string().contains("42"));
}
#[test]
fn context_extension() {
fn fallible() -> Result<(), TransportError> {
Err(TransportError::NoTransports)
}
let result: MeshResult<()> = fallible().context("during startup");
assert!(result.is_err());
let err_str = result.unwrap_err().to_string();
assert!(err_str.contains("during startup"));
}
}
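The `ErrorContext` trait above is an instance of the extension-trait pattern: a blanket impl on `Result` that wraps any convertible error with a human-readable context string. A minimal self-contained sketch of the pattern (simplified error type, illustrative names):

```rust
// Extension-trait pattern, as used by ErrorContext: add a .context(...)
// method to every Result whose error implements Display.
#[derive(Debug)]
struct AppError(String);

trait Context<T> {
    fn context(self, ctx: &str) -> Result<T, AppError>;
}

impl<T, E: std::fmt::Display> Context<T> for Result<T, E> {
    fn context(self, ctx: &str) -> Result<T, AppError> {
        // Prefix the underlying error with the caller-supplied context.
        self.map_err(|e| AppError(format!("{ctx}: {e}")))
    }
}

fn main() {
    let r: Result<i32, AppError> = "not a number".parse::<i32>().context("during startup");
    let msg = format!("{:?}", r.unwrap_err());
    assert!(msg.contains("during startup"));
    println!("context extension ok");
}
```

Unlike the lazy `with_context` variant above, this eager form always builds the context string; the closure form matters when the context is expensive to format on the happy path.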

Some files were not shown because too many files have changed in this diff.