Mesh Service Layer — Generic Application Protocol
Vision
FAPP (therapy slots) is just one application. The same infrastructure could support:
| Service | Announce | Query | Reserve |
|---|---|---|---|
| FAPP | Therapist slots | Patient search | Book appointment |
| Housing | Available rooms/flats | Tenant search | Reserve viewing |
| Repair | Craftsman availability | Customer search | Book repair |
| Tutoring | Tutor slots | Student search | Book lesson |
| Medical | Doctor appointments | Patient search | Book slot |
| Legal | Lawyer availability | Client search | Book consultation |
| Volunteer | Helper availability | Org search | Coordinate help |
| Events | Open seats/tickets | Attendee search | Reserve seat |
Common pattern:
- Provider announces availability
- Consumer queries anonymously
- Match → encrypted reservation
- Confirmation
Design Principles
1. Service Namespacing
Each service has a 32-bit service ID:
pub const SERVICE_FAPP: u32 = 0x0001; // Psychotherapy
pub const SERVICE_HOUSING: u32 = 0x0002; // Housing/Rooms
pub const SERVICE_REPAIR: u32 = 0x0003; // Craftsmen
pub const SERVICE_TUTOR: u32 = 0x0004; // Tutoring
pub const SERVICE_MEDICAL: u32 = 0x0005; // Medical appointments
// ...
pub const SERVICE_CUSTOM: u32 = 0xFFFF; // User-defined
2. Generic Message Envelope
/// Generic service message that wraps any application payload.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ServiceMessage {
/// Service identifier (which application).
pub service_id: u32,
/// Message type within service (Announce=1, Query=2, Response=3, etc.).
pub message_type: u8,
/// Version for forward compatibility.
pub version: u8,
/// Application-specific CBOR payload.
pub payload: Vec<u8>,
/// Provider's mesh address.
pub provider_address: [u8; 16],
/// Ed25519 signature over (service_id, message_type, version, payload).
pub signature: Vec<u8>,
/// Propagation control.
pub hop_count: u8,
pub max_hops: u8,
pub ttl_hours: u16,
pub timestamp: u64,
}
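The signature field covers (service_id, message_type, version, payload). A minimal sketch of the byte buffer a signer would feed to Ed25519, assuming straightforward little-endian concatenation in field order (the exact encoding is not pinned down here):

```rust
/// Build the deterministic buffer the Ed25519 signature covers:
/// service_id (u32 LE), message_type, version, then the raw payload.
/// The concatenation order and endianness are assumptions consistent
/// with the wire format below, not a normative spec.
fn signing_input(service_id: u32, message_type: u8, version: u8, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(6 + payload.len());
    buf.extend_from_slice(&service_id.to_le_bytes());
    buf.push(message_type);
    buf.push(version);
    buf.extend_from_slice(payload);
    buf
}
```

Both signer and verifier must derive this buffer identically, which is why the fields are concatenated in declaration order rather than re-serialized from CBOR.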
3. Capability System
Extend the capability flags:
// Base capabilities (existing)
pub const CAP_RELAY: u16 = 0x0001;
pub const CAP_STORE: u16 = 0x0002;
pub const CAP_GATEWAY: u16 = 0x0004;
// Service-specific capabilities (dynamic)
// Format: 0xSSCC where SS = service_id (high byte), CC = capability
pub const CAP_SERVICE_PROVIDER: u16 = 0x0100; // Can announce
pub const CAP_SERVICE_RELAY: u16 = 0x0200; // Caches & forwards
pub const CAP_SERVICE_CONSUMER: u16 = 0x0400; // Can query
// Example: FAPP therapist
// capabilities = CAP_RELAY | (SERVICE_FAPP << 8) | CAP_SERVICE_PROVIDER
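The FAPP-therapist example above can be made executable. Note the cast: only the low byte of the 32-bit service ID fits the 0xSSCC layout, so this encoding covers services 0x00 through 0xFF:

```rust
const CAP_RELAY: u16 = 0x0001;
const CAP_SERVICE_PROVIDER: u16 = 0x0100;
const SERVICE_FAPP: u32 = 0x0001;

/// Compose the capability word from the FAPP-therapist example:
/// base relay capability plus the service-scoped provider flag.
/// Truncating the u32 service id to the high byte is an assumption
/// implied by the 0xSSCC layout.
fn fapp_therapist_caps() -> u16 {
    CAP_RELAY | ((SERVICE_FAPP as u16) << 8) | CAP_SERVICE_PROVIDER
}

/// Check whether a capability word advertises all bits of a given flag.
fn has_cap(caps: u16, flag: u16) -> bool {
    caps & flag == flag
}
```

For SERVICE_FAPP the shifted service byte (0x0100) happens to coincide with CAP_SERVICE_PROVIDER, so the result is 0x0101; a final spec would need to disambiguate those two bit ranges.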
4. Schema Registry
Services define their own schema, but the schema is not part of the wire protocol:
/// Service schema definition (stored locally, not transmitted).
pub struct ServiceSchema {
pub service_id: u32,
pub name: String,
pub version: u8,
/// CBOR schema for Announce payload.
pub announce_schema: Vec<u8>,
/// CBOR schema for Query payload.
pub query_schema: Vec<u8>,
/// CBOR schema for Response payload.
pub response_schema: Vec<u8>,
/// Required verification level.
pub min_verification: u8,
/// Human-readable description.
pub description: String,
}
Nodes can obtain schemas out of band (website, Git, DNS TXT records).
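A local registry for such out-of-band schemas could be as simple as a map keyed by service ID. This sketch stores only a description string rather than the full ServiceSchema, and all names are assumptions:

```rust
use std::collections::HashMap;

/// Minimal local schema registry (a sketch; the real entry would be the
/// full ServiceSchema struct). Nothing here is transmitted on the wire.
struct SchemaRegistry {
    schemas: HashMap<u32, String>, // service_id -> description
}

impl SchemaRegistry {
    fn new() -> Self {
        Self { schemas: HashMap::new() }
    }

    /// Register a schema fetched out of band (website, Git, DNS TXT).
    fn register(&mut self, service_id: u32, description: &str) {
        self.schemas.insert(service_id, description.to_string());
    }

    /// Look up the schema for an incoming ServiceMessage's service_id.
    fn lookup(&self, service_id: u32) -> Option<&str> {
        self.schemas.get(&service_id).map(String::as_str)
    }
}
```

A node that receives a message for an unregistered service ID can still relay it blindly; the registry is only needed to validate or display payloads.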
5. Service Router
pub struct ServiceRouter {
/// Registered service handlers.
handlers: HashMap<u32, Box<dyn ServiceHandler>>,
/// Shared routing table.
routes: Arc<RwLock<RoutingTable>>,
/// Transport manager.
transports: Arc<TransportManager>,
/// Per-service stores.
stores: HashMap<u32, Box<dyn ServiceStore>>,
}
pub trait ServiceHandler: Send + Sync {
fn service_id(&self) -> u32;
fn handle_announce(&self, msg: &ServiceMessage) -> ServiceAction;
fn handle_query(&self, msg: &ServiceMessage) -> ServiceAction;
fn handle_response(&self, msg: &ServiceMessage) -> ServiceAction;
}
pub trait ServiceStore: Send + Sync {
fn store(&mut self, msg: ServiceMessage) -> bool;
fn query(&self, filter: &[u8]) -> Vec<ServiceMessage>;
fn gc_expired(&mut self) -> usize;
}
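A minimal in-memory store along the lines of the ServiceStore trait. The message fields are simplified stand-ins, and unlike the trait, `gc_expired` takes an explicit `now` parameter here to keep the expiry logic deterministic and testable:

```rust
/// Simplified message for the sketch; the real ServiceMessage carries
/// more fields (signature, hops, provider address, ...).
#[derive(Clone)]
struct StoredMsg {
    service_id: u32,
    payload: Vec<u8>,
    timestamp: u64, // unix seconds
    ttl_hours: u16,
}

/// In-memory store for one service: accepts messages and drops entries
/// whose TTL has elapsed.
struct MemStore {
    msgs: Vec<StoredMsg>,
}

impl MemStore {
    fn new() -> Self {
        Self { msgs: Vec::new() }
    }

    fn store(&mut self, msg: StoredMsg) -> bool {
        self.msgs.push(msg);
        true
    }

    /// Drop messages expired relative to `now`; return how many were removed.
    fn gc_expired(&mut self, now: u64) -> usize {
        let before = self.msgs.len();
        self.msgs
            .retain(|m| m.timestamp + m.ttl_hours as u64 * 3600 > now);
        before - self.msgs.len()
    }
}
```

A production store would also deduplicate by provider address and enforce per-service quotas before accepting a message.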
Wire Protocol
Message Format
┌──────────────────────────────────────────────────────────┐
│ Byte 0-3: Service ID (u32 LE) │
│ Byte 4: Message Type (1=Announce, 2=Query, 3=Resp) │
│ Byte 5: Version │
│ Byte 6-7: Payload Length (u16 LE) │
│ Byte 8-23: Provider Address (16 bytes) │
│ Byte 24-87: Signature (64 bytes) │
│ Byte 88: Hop Count │
│ Byte 89: Max Hops │
│ Byte 90-91: TTL Hours (u16 LE) │
│ Byte 92-99: Timestamp (u64 LE) │
│ Byte 100+: CBOR Payload │
└──────────────────────────────────────────────────────────┘
Total header: 100 bytes + variable payload.
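The layout above can be sketched as an encoder; all integers little-endian, with the signature passed in pre-computed (this sketch does no signing):

```rust
/// Serialize the 100-byte header followed by the CBOR payload,
/// matching the byte layout in the diagram above.
fn encode(service_id: u32, msg_type: u8, version: u8, provider: &[u8; 16],
          signature: &[u8; 64], hop_count: u8, max_hops: u8,
          ttl_hours: u16, timestamp: u64, payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(100 + payload.len());
    out.extend_from_slice(&service_id.to_le_bytes());             // bytes 0-3
    out.push(msg_type);                                           // byte 4
    out.push(version);                                            // byte 5
    out.extend_from_slice(&(payload.len() as u16).to_le_bytes()); // bytes 6-7
    out.extend_from_slice(provider);                              // bytes 8-23
    out.extend_from_slice(signature);                             // bytes 24-87
    out.push(hop_count);                                          // byte 88
    out.push(max_hops);                                           // byte 89
    out.extend_from_slice(&ttl_hours.to_le_bytes());              // bytes 90-91
    out.extend_from_slice(&timestamp.to_le_bytes());              // bytes 92-99
    out.extend_from_slice(payload);                               // byte 100+
    out
}
```

The u16 length field caps payloads at 65,535 bytes; LoRa transports would fragment well below that (see Open Questions).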
Message Types
| Type | Value | Direction |
|---|---|---|
| Announce | 0x01 | Provider → Mesh |
| Query | 0x02 | Consumer → Mesh |
| Response | 0x03 | Relay/Provider → Consumer |
| Reserve | 0x04 | Consumer → Provider |
| Confirm | 0x05 | Provider → Consumer |
| Cancel | 0x06 | Either → Other |
| Update | 0x07 | Provider → Mesh (partial update) |
| Revoke | 0x08 | Provider → Mesh (cancel announce) |
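The type byte from the table maps naturally onto a checked enum, so an unknown value is rejected at parse time instead of silently misrouted:

```rust
/// Message type byte from the table above.
#[derive(Debug, PartialEq)]
enum MessageType {
    Announce = 0x01,
    Query = 0x02,
    Response = 0x03,
    Reserve = 0x04,
    Confirm = 0x05,
    Cancel = 0x06,
    Update = 0x07,
    Revoke = 0x08,
}

impl TryFrom<u8> for MessageType {
    type Error = u8;
    /// Returns the unknown byte as the error so callers can log it.
    fn try_from(b: u8) -> Result<Self, u8> {
        Ok(match b {
            0x01 => Self::Announce,
            0x02 => Self::Query,
            0x03 => Self::Response,
            0x04 => Self::Reserve,
            0x05 => Self::Confirm,
            0x06 => Self::Cancel,
            0x07 => Self::Update,
            0x08 => Self::Revoke,
            other => return Err(other),
        })
    }
}
```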
Example: Housing Service
// Define the service
pub const SERVICE_HOUSING: u32 = 0x0002;
#[derive(Serialize, Deserialize)]
pub struct HousingAnnounce {
pub room_type: RoomType, // shared flat (WG), apartment, house
pub size_sqm: u16,
pub rent_euros: u16,
pub available_from: u64,
pub plz: String, // German postal code (PLZ)
pub amenities: Vec<Amenity>,
pub landlord_profile_url: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct HousingQuery {
pub room_type: Option<RoomType>,
pub max_rent: Option<u16>,
pub min_size: Option<u16>,
pub plz_prefix: Option<String>,
pub available_before: Option<u64>,
}
// Register with ServiceRouter
router.register(HousingHandler::new(housing_store));
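A relay filtering announces against a HousingQuery boils down to "every present query field must match." A sketch with simplified field types (enums reduced to a byte; struct names follow the definitions above):

```rust
/// Simplified shapes for the matching sketch; the real structs carry
/// more fields and proper enums.
struct Announce {
    room_type: u8,
    size_sqm: u16,
    rent_euros: u16,
    plz: String,
}

struct Query {
    room_type: Option<u8>,
    max_rent: Option<u16>,
    min_size: Option<u16>,
    plz_prefix: Option<String>,
}

/// Relay-side filter: a missing (None) query field matches anything;
/// a present field must be satisfied by the announce.
fn query_matches(a: &Announce, q: &Query) -> bool {
    q.room_type.map_or(true, |t| a.room_type == t)
        && q.max_rent.map_or(true, |r| a.rent_euros <= r)
        && q.min_size.map_or(true, |s| a.size_sqm >= s)
        && q.plz_prefix
            .as_ref()
            .map_or(true, |p| a.plz.starts_with(p.as_str()))
}
```

Prefix matching on the postal code gives the coarse location filtering the privacy table below calls for, without revealing a street address.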
Migration Path for FAPP
FAPP can be migrated to the generic layer:
// Before: FAPP-specific
let announce = SlotAnnounce::new(...);
fapp_router.broadcast_announce(announce)?;
// After: Generic service layer
let payload = FappAnnouncePayload { ... };
let msg = ServiceMessage::announce(SERVICE_FAPP, &identity, payload)?;
service_router.broadcast(msg)?;
Backwards compatibility:
- Old FAPP nodes understand only the FAPP wire format
- New nodes speak both formats
- Transition period of six months, then deprecate the old format
Verification Framework
Generic verification that applies to all services:
pub struct Verification {
/// Who endorsed this provider.
pub endorser_address: [u8; 16],
/// Signature over (provider_address, service_id, timestamp).
pub signature: [u8; 64],
/// Unix timestamp.
pub timestamp: u64,
/// Verification level achieved.
pub level: u8,
/// Service-specific verification data (e.g., license number).
pub credential_hash: Option<[u8; 32]>,
/// Human-readable reason.
pub reason: String,
}
/// Verification levels (generic across services).
pub const VERIFY_NONE: u8 = 0;
pub const VERIFY_ENDORSED: u8 = 1; // Web-of-trust
pub const VERIFY_REGISTRY: u8 = 2; // Official registry
pub const VERIFY_CREDENTIAL: u8 = 3; // Verified credential (eHBA, etc.)
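Because the levels are ordered, checking a provider against a schema's `min_verification` is a plain numeric comparison over the provider's best endorsement:

```rust
const VERIFY_NONE: u8 = 0;
const VERIFY_ENDORSED: u8 = 1;
const VERIFY_REGISTRY: u8 = 2;
const VERIFY_CREDENTIAL: u8 = 3;

/// The strongest level among a provider's verifications
/// (an empty slice means unverified).
fn best_level(levels: &[u8]) -> u8 {
    levels.iter().copied().max().unwrap_or(VERIFY_NONE)
}

/// Accept a provider only if its level meets the service schema's
/// `min_verification`. Levels are strictly ordered, so >= suffices.
fn meets_requirement(provider_level: u8, min_verification: u8) -> bool {
    provider_level >= min_verification
}
```

A FAPP-like medical service would set `min_verification` to VERIFY_CREDENTIAL, while a volunteer service might accept VERIFY_ENDORSED.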
Service Discovery
How do nodes discover which services exist?
Option A: Hardcoded Core Services
const CORE_SERVICES: &[u32] = &[
SERVICE_FAPP,
SERVICE_HOUSING,
SERVICE_REPAIR,
];
Option B: Service Announce
/// Node announces which services it supports.
pub struct ServiceCapabilityAnnounce {
pub node_address: [u8; 16],
pub services: Vec<ServiceCapability>,
pub signature: [u8; 64],
}
pub struct ServiceCapability {
pub service_id: u32,
pub roles: u8, // Provider | Relay | Consumer
pub version: u8,
}
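The `roles` byte above packs Provider | Relay | Consumer into bit flags. The concrete bit values below are assumptions (the document only names the three roles):

```rust
/// Role bits for ServiceCapability.roles; the assignments are
/// illustrative, not specified by this document.
const ROLE_PROVIDER: u8 = 0b001;
const ROLE_RELAY: u8 = 0b010;
const ROLE_CONSUMER: u8 = 0b100;

/// Decode a roles byte into readable names, e.g. for a status display.
fn role_names(roles: u8) -> Vec<&'static str> {
    let mut out = Vec::new();
    if roles & ROLE_PROVIDER != 0 { out.push("provider"); }
    if roles & ROLE_RELAY != 0 { out.push("relay"); }
    if roles & ROLE_CONSUMER != 0 { out.push("consumer"); }
    out
}
```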
Option C: DNS-SD / mDNS
_fapp._mesh._udp.local.
_housing._mesh._udp.local.
Recommendation: Start with Option A (hardcoded), add Option B when needed.
Privacy Considerations
| Aspect | Design |
|---|---|
| Provider identity | Public (bound to credential) |
| Consumer identity | Anonymous (no ID in queries) |
| Query content | Visible to relays (filter by service) |
| Reservation | E2E encrypted to provider |
| Location | Coarse only (postal code/PLZ, not street address) |
Cost Model
Relay nodes do work. How to compensate?
| Model | Pros | Cons |
|---|---|---|
| Altruism | Simple, no tokens | Free-rider problem |
| Reciprocity | "I relay, you relay" | Complex accounting |
| Micropayments | Fair, incentivizes | Needs payment rails |
| Subscription | Predictable | Centralization risk |
Recommendation: Start altruistic, add optional micropayments later.
Implementation Roadmap
Phase 1: Generic Layer (Now)
- ServiceMessage struct
- ServiceRouter with handler registration
- ServiceStore trait
- Migrate FAPP to generic layer
- Tests
Phase 2: Second Service (Q2 2026)
- Pick one: Housing or Tutoring
- Implement as second service on same layer
- Prove the abstraction works
Phase 3: Verification Framework (Q3 2026)
- Generic endorsement messages
- Verification levels
- Trusted relay network
Phase 4: Service Discovery (Q4 2026)
- ServiceCapabilityAnnounce
- Dynamic service registration
- Schema distribution
Open Questions
- Payload size limits? LoRa vs. TCP have very different constraints.
- Query routing? Flood vs. DHT vs. gossip?
- Cross-service queries? "Find therapist OR coach near me"
- Service-specific rate limits? Housing might need different limits than FAPP.
- Governance? Who assigns service IDs? IANA-style registry?
Conclusion
QuicProQuo's mesh layer can become a generic decentralized service platform:
┌─────────────────────────────────────────────────────────────┐
│ Application Services │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ FAPP │ │ Housing │ │ Repair │ │ Custom │ ... │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ │ │ │ │ │
│ ─────┴────────────┴────────────┴────────────┴────────── │
│ Service Layer │
│ ServiceMessage, ServiceRouter, Verification │
│ ─────────────────────────────────────────────────────── │
│ Mesh Layer │
│ MeshRouter, RoutingTable, Announce, Store-and-Forward │
│ ─────────────────────────────────────────────────────── │
│ Transport Layer │
│ Iroh (QUIC), TCP, LoRa, Serial │
└─────────────────────────────────────────────────────────────┘
The mesh IS the platform. No central servers, no vendor lock-in, no single point of failure.