feat: ACT/ECT strategy, package restructure, draft -01/-02 prep

Strategic work for IETF submission of draft-nennemann-act-01 and
draft-nennemann-wimse-ect-02:

Package restructure:
- move ACT and ECT refimpls to workspace/packages/{act,ect}/
- ietf-act and ietf-ect distribution names (sibling packages)
- cross-spec interop test plan (INTEROP-TEST-PLAN.md)

ACT draft -01 revisions:
- rename 'par' claim to 'pred' (align with ECT)
- rename 'Agent Compact Token' to 'Agent Context Token' (semantic
  alignment with ECT family)
- add Applicability section (MCP, OpenAI, LangGraph, A2A, CrewAI)
- add DAG vs Linear Delegation Chains section (differentiator vs
  txn-tokens-for-agents actchain, Agentic JWT, AIP/IBCTs)
- add Related Work: AIP, SentinelAgent, Agentic JWT, txn-tokens-for-agents,
  HDP, SCITT-AI-agent-execution
- pin SCITT arch to -22, note AUTH48 status

Outreach drafts:
- Emirdag liaison email (SCITT-AI coordination)
- OAuth ML response on txn-tokens-for-agents-06

Strategy document:
- STRATEGY.md with phased action plan, risk register, timeline

Submodule:
- update workspace/drafts/ietf-wimse-ect pointer to -02 commit
workspace/STRATEGY.md (new file, 224 lines)
@@ -0,0 +1,224 @@
# ACT + ECT IETF Strategy

**Author**: Christian Nennemann
**Date**: 2026-04-12
**Status**: Active

---

## 1. Executive Summary

Two Internet-Drafts, one strategy: **ACT** (general) + **ECT** (WIMSE profile) form a complementary spec family for AI agent authorization and execution accountability.

**The window**: In the last 8 weeks, 14+ competing IETF individual drafts and 7+ high-relevance arXiv papers have appeared. The space is crowding fast. **Ship -01/-02 within 2 weeks**; establish IETF 123 (July 2026) as the landing point.

**The position**: ACT is the only spec combining (a) a two-phase JWT lifecycle, (b) a DAG-based predecessor structure, and (c) standards-track independence from proprietary agent frameworks. ECT is the only WIMSE-aligned execution-context spec.

---
## 2. Current State (What We Have)

### Artifacts in place

| Artifact | Location | Status |
|---|---|---|
| ACT draft | `packages/act/draft-nennemann-act-01.md` | -01, ready for review |
| ECT draft | `drafts/ietf-wimse-ect/draft-nennemann-wimse-ect.md` | -02, needs HTTP header update |
| ACT refimpl | `packages/act/` (ietf-act) | 103 tests pass, `pred` + Context rename done |
| ECT refimpl | `packages/ect/` (ietf-ect) | 56 tests pass, `inp_hash` bug fixed |
| ACT applicability section | In draft §1.5 | MCP, OpenAI, LangGraph, A2A, CrewAI, WIMSE-ECT |
| Diff doc vs Txn-Agents | `drafts/ietf-wimse-ect/DIFF-vs-txn-tokens-for-agents.md` | Done, ~1235 words |
| WIMSE mailing list email | `drafts/ietf-wimse-ect/wimse-intro-email.md` | Done, ~390 words |

### Recent completed work

- `par` → `pred` rename across ACT (spec alignment with ECT)
- "Agent Compact Token" → "Agent Context Token" rename (semantic alignment with ECT)
- Package restructure to `workspace/packages/{act,ect}/`
- ECT `inp_hash` format bug fix (removed `sha-256:` prefix)

---
## 3. Landscape (What Just Happened)

### Critical drafts published April 7–11, 2026

| Draft | Impact | Response |
|---|---|---|
| `draft-emirdag-scitt-ai-agent-execution-00` | SCITT profile for AgentInteractionRecord (AIR) | **Propose liaison**: ACT = lifecycle, AIR = anchor payload |
| `draft-oauth-transaction-tokens-for-agents-06` | Amazon's `actchain` competes with ACT's DAG | **Differentiate**: linear chain vs DAG (fork/join) |
| `draft-ietf-wimse-http-signature-03` | `Wimse-Audience` header **removed** → `wimse-aud` param | **Breaking change — fix ECT immediately** |
| `draft-ietf-oauth-transaction-tokens-08` | In WG Last Call → RFC imminent | Lock references before publication |
| `draft-ietf-scitt-architecture-22` | In AUTH48 → RFC imminent | Update SCITT refs to RFC number |

### Competitive arXiv papers (Mar–Apr 2026)

- **2603.24775 (AIP/IBCTs)** — closest technical competitor, JWT + Biscuit/Datalog, zero auth on ~2000 MCP servers
- **2604.02767 (SentinelAgent)** — formal Delegation Chain Calculus
- **2509.13597 (Agentic JWT)** — prior linear-chain JWT
- **2603.23801 (AgentRFC — Composition Safety)** — theoretical grounding for DAG-level tracking

### Strategic openings

- `draft-ietf-wimse-arch-07 §3.3.9` — the WG arch doc **already names AI/ML intermediaries as workloads**; ECT fills this gap
- **DAWN potential new WG** (`draft-king-dawn-requirements-00`, 2026-04-11) — agent discovery; ACT identity claims are a natural payload
- **NIST/NCCoE Concept Paper** — US government validation of a standards-first agent identity approach

---
## 4. Positioning Strategy

### The three-sentence pitch

> ACT is a two-phase JWT lifecycle — the authorization mandate transitions into a tamper-evident execution record, producing a cryptographically verifiable DAG of agent invocations. ECT is the WIMSE profile that binds ACT-style execution records to workload identity with assurance levels. Together they close the agent-accountability gap that OAuth/WIMSE/SCITT leave partially open.
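The two-phase lifecycle in the pitch can be made concrete with a minimal sketch. The claim names (`jti`, `exec_act`, `pred`, `inp_hash`, `out_hash`, `status`) come from the drafts; the builder functions and exact claim shapes below are illustrative assumptions, not the refimpl API:

```python
import time
import uuid


def mandate_claims(issuer, audience, action, inp_hash):
    """Phase 1: pre-execution Mandate (authorization + input commitment).
    Claim names follow the draft; shapes here are illustrative."""
    now = int(time.time())
    return {
        "jti": str(uuid.uuid4()),
        "iss": issuer,
        "aud": audience,
        "iat": now,
        "exp": now + 300,
        "exec_act": action,    # intended action, e.g. "read.data"
        "inp_hash": inp_hash,  # base64url SHA-256 commitment to the input
    }


def record_claims(mandate, out_hash, status="success", extra_preds=()):
    """Phase 2: post-execution Record linking back via `pred`.
    Multiple predecessors are what make the Records form a DAG."""
    now = int(time.time())
    return {
        "jti": str(uuid.uuid4()),
        "iss": mandate["iss"],
        "aud": mandate["aud"],
        "iat": now,
        "exp": now + 300,
        "exec_act": mandate["exec_act"],
        "pred": [mandate["jti"], *extra_preds],  # causal predecessors
        "out_hash": out_hash,
        "status": status,
    }
```

Signing each claim set as a JWT (Ed25519 or ES256, per the draft) then yields the tamper-evident record the pitch describes.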
### Differentiation matrix

| Against | How ACT/ECT differ |
|---|---|
| `draft-oauth-transaction-tokens-for-agents` | Two-phase lifecycle (authorization → proof-of-execution), DAG (not linear `actchain`), works without an AuthZ server |
| `draft-emirdag-scitt-ai-agent-execution` | Lifecycle-layer complement, not competitor; ACT produces what AIR anchors |
| AIP/IBCTs (arXiv 2603.24775) | Standards-track IETF home; JWT-only (no Biscuit/Datalog complexity) |
| `draft-helixar-hdp-agentic-delegation` | JWT/JOSE-standard (vs raw JSON), DAG (vs linear), IETF path |
| SentinelAgent (arXiv 2604.02767) | Standards deployability (vs formal calculus) |
| Agentic JWT (arXiv 2509.13597) | Two-phase lifecycle; DAG vs linear chain |

### Non-goals (state these explicitly)

- ACT does not replace WIMSE WIT/WPT — it sits above them
- ACT does not replace OAuth/Txn-Tokens — it profiles them for agent semantics
- ACT does not require SCITT — but integrates cleanly with it
- ECT does not carry identity — it carries execution context

---
## 5. Action Plan

### Phase A — Urgent technical updates (this week)

- [ ] **A1**: Update ECT HTTP header section — replace `Wimse-Audience` with the `wimse-aud` signature metadata parameter per `draft-ietf-wimse-http-signature-03`
- [ ] **A2**: Update SCITT references in ACT — point to `draft-ietf-scitt-architecture-22` (AUTH48); note RFC-to-be
- [ ] **A3**: Update Txn-Tokens references in ACT/ECT — lock to `draft-ietf-oauth-transaction-tokens-08`
- [ ] **A4**: Add "DAG vs linear chain" section to ACT — key technical differentiator
- [ ] **A5**: Add Related Work entries to ACT:
  - AIP/IBCTs (arXiv 2603.24775)
  - SentinelAgent (arXiv 2604.02767)
  - Agentic JWT (arXiv 2509.13597)
  - Txn-Tokens-for-Agents-06
  - HDP (`draft-helixar-hdp-agentic-delegation`)
- [ ] **A6**: Add Related Work entries to ECT:
  - WIMSE arch §3.3.9 (explicit)
  - Composition Safety (arXiv 2603.23801)
  - MIGT taxonomy (arXiv 2604.06148)
  - NIST/NCCoE Concept Paper
- [ ] **A7**: Commit all current work to git (workspace + research.ietf subrepo)

### Phase B — External engagement (next 1–2 weeks)

- [ ] **B1**: Email Emirdag (VERIDIC) — propose a SCITT-AI + ACT liaison; coordinate the AIR payload format with ACT execution-phase claims
- [ ] **B2**: Submit ACT -01 to the datatracker
- [ ] **B3**: Submit ECT -02 to the datatracker
- [ ] **B4**: Post ECT intro email to wimse@ietf.org with a link to the diff doc
- [ ] **B5**: Post a short response to the OAuth WG on Txn-Tokens-for-Agents-06 — compare `actchain` (linear) vs ACT `pred` (DAG); offer as complementary, not competitive
- [ ] **B6**: Request a 10-min slot at the IETF 123 WIMSE session (July 2026)
- [ ] **B7**: Track DAWN WG charter formation — if it charters, submit a positioning comment on how ACT identity claims serve discovery

### Phase C — IETF 123 preparation (May–June 2026)

- [ ] **C1**: Iterate ACT/ECT based on mailing list feedback
- [ ] **C2**: Prepare 10-min WIMSE slides (focus: the gap filled, relationship to adopted drafts, ECT's role in execution-context propagation)
- [ ] **C3**: Prepare a 5-min OAuth slot request in case Txn-Tokens-for-Agents discussion opens
- [ ] **C4**: Reference implementation hardening: test vectors, interop with at least one other implementation

### Phase D — Post-IETF 123 (August 2026+)

- [ ] **D1**: Based on WIMSE reception, either iterate toward WG adoption or pivot to a BoF-style workshop
- [ ] **D2**: If the SCITT-AI liaison forms, draft a joint implementation report
- [ ] **D3**: If DAWN charters, submit an ACT positioning statement

---
## 6. Timeline

```
2026-04-12  Strategy finalized (today)
2026-04-12  Phase A starts
2026-04-19  Phase A complete, ACT-01 + ECT-02 submitted
2026-04-20  Phase B starts (WIMSE ML post + Emirdag outreach)
2026-05-01  All external engagement initiated
2026-07-xx  IETF 123 (target: WIMSE 10-min slot)
2026-08-xx  Post-IETF 123 review; decide WG adoption strategy
```

---
## 7. Risk Register

| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| WIMSE WG rejects ECT as out-of-charter | Medium | High | Cite arch §3.3.9 explicitly; frame as charter-aligned |
| Amazon Txn-Tokens-for-Agents gets OAuth WG adoption first | High | Medium | Differentiate at the DAG/lifecycle level; position as a complementary layer |
| SCITT-AI (Emirdag) adopted; ACT seen as redundant | Medium | High | Proactive liaison; position as lifecycle vs anchoring |
| DAWN charters without ACT positioning | Medium | Medium | Submit positioning statement during charter review |
| 14+ competing drafts fragment the space | High | Medium | Focus on ACT's unique two-phase lifecycle; cite competitors as related work |
| Independent-submission path stalls for ACT | Medium | Medium | Keep ECT on the WG-adoption path; ACT can stay independent longer if needed |

---
## 8. Success Criteria

### 30-day criteria

- ACT-01 + ECT-02 on the datatracker
- WIMSE mailing list engagement (≥3 replies from chairs/contributors)
- Emirdag liaison conversation started

### 90-day criteria (IETF 123 timing)

- 10-minute WIMSE agenda slot secured
- ≥1 independent implementation of ACT or ECT outside our refimpl
- Referenced by at least 2 other drafts

### 180-day criteria

- WIMSE WG adoption call for ECT (or a clear path to it)
- SCITT-AI joint profile or explicit coordination
- ACT independent submission moving toward the RFC Editor queue

---
## 9. Dependencies and Open Decisions

### External dependencies

- `draft-ietf-scitt-architecture` → RFC (timing unknown; in AUTH48 now)
- `draft-ietf-oauth-transaction-tokens-08` → RFC (in WG Last Call now)
- `draft-ietf-wimse-http-signature` → breaking change needs to be propagated
- WIMSE WG charter interpretation (chairs' call)

### Open decisions (need user input)

- Approach to Emirdag: liaison email, co-authorship offer, or just a citation?
- Publish refimpls to PyPI? (Package names `ietf-act`/`ietf-ect` are currently reserved but not published — **no publishing without explicit user approval**.)
- Repo strategy: single monorepo, or split ACT/ECT into separate Git repos for separate draft homes?
- IETF 123 travel: attend in person or remote?

---
## 10. References

### Our work

- `packages/act/draft-nennemann-act-01.md`
- `drafts/ietf-wimse-ect/draft-nennemann-wimse-ect.md` (docname -02)
- `drafts/ietf-wimse-ect/DIFF-vs-txn-tokens-for-agents.md`
- `drafts/ietf-wimse-ect/wimse-intro-email.md`

### Key competing/complementary drafts

- draft-oauth-transaction-tokens-for-agents-06 (Raut/Amazon)
- draft-emirdag-scitt-ai-agent-execution-00 (VERIDIC)
- draft-helixar-hdp-agentic-delegation-00
- draft-king-dawn-requirements-00 (potential new WG)
- draft-ietf-wimse-arch-07 (cite §3.3.9)
- draft-ietf-wimse-http-signature-03 (breaking change)

### Key arXiv references

- 2603.24775 — AIP / IBCTs
- 2604.02767 — SentinelAgent
- 2603.23801 — AgentRFC (Composition Safety)
- 2509.13597 — Agentic JWT
- 2604.06148 — MIGT taxonomy
workspace/act/MOVED.md (new file, 1 line)
@@ -0,0 +1 @@

Canonical location moved to workspace/packages/act/

Submodule workspace/drafts/ietf-wimse-ect updated: ba38569319...d47f041265
workspace/drafts/outreach/emirdag-liaison-email.md (new file, 30 lines)
@@ -0,0 +1,30 @@
Dear Dr. Emirdag,

Congratulations on the publication of draft-emirdag-scitt-ai-agent-execution-00 earlier today. I came across it while tracking SCITT-adjacent work on AI agent accountability, and I wanted to reach out because the positioning looks genuinely complementary to a pair of drafts I have been developing.

Brief introduction: I am Christian Nennemann, an independent researcher working on execution-context and lifecycle tokens for agentic systems. My current IETF work consists of:

- draft-nennemann-act-01 (Agent Context Token): a JWT-based two-phase lifecycle — a pre-execution Mandate token carrying authorization, scope, and input commitments, followed by a post-execution Record token committing to outputs and linking back via `pred`. Multiple Records form a DAG, signed with Ed25519 or ES256.
- draft-nennemann-wimse-ect-02 (Execution Context Token): a WIMSE profile with three assurance levels and identity binding for the workload that produced a given execution.

Reading your AIR specification, the layering seems fairly clean: ACT defines *what* is being anchored — the lifecycle token with its authorization proof, input/output commitments, and causal predecessor links — while AIR defines *how* it is anchored on a SCITT transparency service, as a COSE_Sign1 payload with its hash chain, four-step verification, and EU AI Act / NIST AI RMF mappings. There is real conceptual overlap on input/output hashing, reasoning capture, identity, timing, and causality, which suggests that coordinating now would save both of us retrofitting later.

A few concrete options, in rough order of effort:

(a) Cross-citations in both drafts, establishing the "ACT record → AIR payload → SCITT receipt" flow as the intended pipeline.
(b) A short shared section on "Anchoring ACT Records in SCITT" — either folded into ACT-02 or as a small companion draft if you prefer neutral ground.
(c) Aligning claim semantics where they overlap — in particular the input/output hash representation (I currently use `inp_hash` / `out_hash` on the JWT side) so that translation to AIR is lossless.
(d) If we both attend IETF 123, a joint slot in SCITT or a side meeting could make the layering concrete for the WG.

I would be happy to send you the current ACT and ECT drafts and to review yours in detail before either of us adds formal cross-references. This is low-pressure — I mainly wanted to flag the alignment while the drafts are still malleable.

Looking forward to your thoughts.

Best regards,
Christian Nennemann
Independent Researcher
[contact details]

---

**Suggested subject line:** Liaison proposal: ACT/ECT lifecycle tokens and SCITT-AI AIR — complementary layering
workspace/drafts/outreach/oauth-ml-response.md (new file, 71 lines)
@@ -0,0 +1,71 @@
**To:** oauth@ietf.org
**From:** Christian Nennemann <ietf@nennemann.de>
**Subject:** draft-oauth-transaction-tokens-for-agents-06: complementary work on DAG-based delegation (draft-nennemann-act)

Hi all,

I noticed the publication of draft-oauth-transaction-tokens-for-agents-06
(Raut et al., 2026-04-11) and wanted to share some complementary work that
addresses an adjacent slice of the agent-delegation problem space. The
Amazon draft fills a real gap at the OAuth authorization-server layer, and
I think there is useful coordination potential rather than overlap.

# Technical difference in one paragraph

draft-oauth-transaction-tokens-for-agents introduces `actchain` as an
ordered array documenting delegation history, plus `agentic_ctx` carrying
type/version/intent/operational constraints, with a split between
principal-initiated and autonomous flow types. Our work
(draft-nennemann-act-01) models delegation history as a DAG through a
`pred` (predecessor) claim that is itself an array of parent token
references. A linear `actchain` is a special case of the DAG form in which
every node has exactly one predecessor.

# Why a DAG, concretely

Consider an agent that fans out to N parallel sub-agents (e.g. one per
data source) and then synthesizes a single response from their results.
The synthesis step has N predecessors, not one. A linear `actchain`
cannot express this fan-in; you would have to either linearize artificially
(losing causality) or emit N parallel chains (losing the join). With a
DAG-valued `pred`, the synthesis token references all N predecessor tokens
directly, and a verifier can walk the graph to check that each parallel
branch was authorized and unexpired. Fork, join, and diamond topologies
all fall out of the same structure.
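The fan-out/fan-in argument can be shown in a few lines of Python; the token shapes are illustrative, and only the `pred` claim name comes from the draft:

```python
# Sketch: fan-out/fan-in delegation expressed via a DAG-valued `pred` claim.
tokens = {}

def mint(jti, preds):
    """Record a (toy) token with its list of predecessor jtis."""
    tokens[jti] = {"jti": jti, "pred": list(preds)}
    return tokens[jti]

root = mint("root", [])                                       # orchestrator
branches = [mint(f"worker-{i}", ["root"]) for i in range(3)]  # fan-out
synthesis = mint("synthesis", [t["jti"] for t in branches])   # fan-in: 3 preds

def ancestors(jti):
    """The graph walk a verifier would do to audit every branch."""
    seen, stack = set(), [jti]
    while stack:
        for p in tokens[stack.pop()]["pred"]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# The synthesis step is causally downstream of every branch and the root;
# a linear actchain could encode only one of these paths.
assert ancestors("synthesis") == {"root", "worker-0", "worker-1", "worker-2"}
```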
# Layering, not competition

These two drafts sit at different layers:

- Txn-Tokens-for-Agents is anchored at an OAuth authorization server:
  the AS mints and validates tokens, and `actchain` is read in the
  context of an AS-issued transaction token.
- ACT is designed for peer-to-peer agent orchestration without
  requiring an AS in the hot path — useful for multi-vendor agent
  meshes where no single AS is authoritative. It is transport-agnostic
  and leans on JWS for provenance.

An AS-issued txn-token could carry an ACT-shaped `pred` graph
internally, or an ACT chain could terminate at an AS that upgrades it
into a txn-token for a specific resource. The two seem stackable.

# Offer

Happy to compare notes and test vectors, especially around:

- claim naming: `agentic_ctx` (Raut) vs ACT's `task` claim — is there
  an opportunity to align on a shared intent/constraint shape so that
  downstream verifiers don't have to parse both?
- linear-subset interop: confirming that a degenerate DAG (each node
  with one parent) round-trips cleanly to/from `actchain`.
- autonomous-flow semantics: how ACT's unattended-delegation marker
  maps onto Raut's autonomous flow type.
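For the linear-subset point, a sketch of the round-trip (record shapes are illustrative; `pred` and `actchain` are the drafts' claim names):

```python
# A pred graph where every node has exactly one parent converts losslessly to
# an ordered actchain and back; fan-in or fan-out makes the conversion fail.
def pred_to_actchain(records):
    """records: {jti: [pred jtis]}. Returns the oldest-first chain,
    or raises ValueError if the graph is not a single linear chain."""
    children, roots = {}, []
    for jti, preds in records.items():
        if len(preds) > 1:
            raise ValueError(f"{jti}: fan-in cannot be expressed as actchain")
        if preds:
            if preds[0] in children:
                raise ValueError(f"{preds[0]}: fan-out cannot be expressed as actchain")
            children[preds[0]] = jti
        else:
            roots.append(jti)
    if len(roots) != 1:
        raise ValueError("actchain requires exactly one root")
    chain, cur = [roots[0]], roots[0]
    while cur in children:
        cur = children[cur]
        chain.append(cur)
    return chain

def actchain_to_pred(chain):
    """Inverse direction: each entry's predecessor is the previous entry."""
    return {jti: ([chain[i - 1]] if i else []) for i, jti in enumerate(chain)}

chain = ["user", "orchestrator", "tool-call"]
assert pred_to_actchain(actchain_to_pred(chain)) == chain
```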
ACT draft: https://datatracker.ietf.org/doc/draft-nennemann-act/

Feedback welcome, on- or off-list.

Best,
Christian Nennemann
Independent Researcher
ietf@nennemann.de
workspace/packages/INTEROP-TEST-PLAN.md (new file, 113 lines)
@@ -0,0 +1,113 @@
# ACT / ECT Cross-Spec Interop Test Plan

**Status**: Draft (Task C4 preparation — planning only, not yet implemented)
**Scope**: Python refimpls `ietf-act` (Phase 1/2, 103 tests) and `ietf-ect` (single-phase, 56 tests)
**Deliverable**: `packages/interop/tests/test_interop.py` + compatibility matrix docs

## 1. Goals and Non-Goals

### Goals

- Empirically document which shared claims round-trip cleanly between the refimpls.
- Surface real format-level incompatibilities (hash encoding, `typ` header, algorithm support) rather than assuming that spec-level claim overlap implies wire interop.
- Produce a user-facing **compatibility matrix** that implementers can rely on when building bridges between Phase 2 ACT Records and ECT payloads.
- Provide executable regression tests so that future changes to either refimpl cannot silently break the documented interop level without CI noticing.

### Non-Goals

- Propose spec unification or new shared claim registries.
- Build a lossy translator/bridge between ACT Records and ECT payloads.
- Test `typ` cross-acceptance — `act+jwt` and `exec+jwt` MUST remain distinct token types.
- Forge one token type as the other.
- Add new crypto backends (e.g., Ed25519 support) to ECT as part of this work.
## 2. Known Shape of the Problem

Shared claims (by name): `jti`, `wid`, `iat`, `exp`, `aud`, `exec_act`, `pred`, `inp_hash`, `out_hash`.

Confirmed divergences discovered while reading the code:

- **Hash encoding mismatch**: ACT's `b64url_sha256()` emits plain base64url (e.g. `n4bQgYhMfWWaL-qgxVrQFaO_TxsrC4Is0V1sFbDwCgg`). ECT's `validate_hash_format()` requires the `alg:base64url` form (e.g. `sha-256:...`) and raises on plain b64url. The briefing says this was "recently fixed to match ACT's plain base64url format", but the ECT validator still requires the prefix — the plan must include a reproducer.
- **Algorithm**: ACT supports `EdDSA` + `ES256`; ECT hard-codes `ES256` (see `ect/verify.py`, line 59, `"ect: expected ES256"`).
- **Typ header**: ACT requires `act+jwt`; ECT requires `exec+jwt` (with legacy `wimse-exec+jwt`). Neither accepts the other — and per the stated non-goals, neither should.
- **aud shape**: ACT stores `aud` as `str | list[str]`; ECT normalises to `list[str]` via `_audience_deserialize`.
- **Claims unique to ACT**: `sub`, `iss` (required string), `task`, `cap`, `del`, `oversight`, `exec_ts`, `status`, `err`.
- **Claims unique to ECT**: `ect_ext`, `inp_classification`, and the policy claims inside `ect_ext` (`pol`, `pol_decision`, `compensation_required`).
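A minimal standalone reproducer for the hash-format divergence. The two functions approximate the behaviours described above — `b64url_sha256` mirrors ACT's output (same name as the refimpl helper), while the ECT validator is approximated with a regex; neither refimpl is imported:

```python
import base64
import hashlib
import re


def b64url_sha256(data: bytes) -> str:
    """Mirrors ACT's plain-base64url hash output (no algorithm prefix)."""
    return base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=").decode()


def ect_validate_hash_format(value: str) -> None:
    """Approximation of ECT's validator: requires the `sha-256:` prefix."""
    if not re.fullmatch(r"sha-256:[A-Za-z0-9_-]{43}", value):
        raise ValueError("ect: inp_hash/out_hash must be algorithm:base64url")


act_form = b64url_sha256(b"test")          # the plain-b64url example from above
ect_validate_hash_format("sha-256:" + act_form)  # prefixed form: accepted
try:
    ect_validate_hash_format(act_form)     # ACT's plain form: rejected
except ValueError as exc:
    print(exc)
```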
## 3. Test Categories

### 3.1 Shared claim consistency (`TestSharedClaims`)

- `test_jti_format_roundtrips`: a UUID-v4 `jti` is accepted by both refimpls; a non-UUID `jti` is accepted by ACT (no UUID check) but by ECT only when `validate_uuids=False` (document the asymmetry).
- `test_wid_shared_semantics`: the same `wid` value on an ACT Record and an ECT payload — both accept.
- `test_iat_exp_numericdate`: an identical integer NumericDate is accepted by both (ACT uses a strict `> 0` check, ECT uses `int(claims["iat"])`).
- `test_aud_string_vs_list`: a string `aud` is preserved by ACT and coerced to a list by ECT; the list form is lossless on both.
- `test_exec_act_string_both_sides`: the same `exec_act` value (e.g. `read.data`) serialises identically; ACT additionally validates the ABNF grammar — test that ECT accepts an ACT-grammar-legal value unchanged.
- `test_pred_array_shape`: `pred=[]`, `pred=[jti1]`, `pred=[jti1, jti2]` — both refimpls serialise/deserialise identically.
- `test_inp_hash_format_divergence` (**expected xfail/documented**): feed ACT's plain b64url output into the ECT validator — expect `ValueError("ect: inp_hash/out_hash must be algorithm:base64url...")`. This pins the incompatibility so that a future fix flips the test green.
- `test_inp_hash_prefixed_form`: a `sha-256:<b64url>` value is accepted by ECT; ACT treats it as an opaque string (no validation) and round-trips it without error.
- `test_out_hash_same_as_inp`: mirror the above for `out_hash`.
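As a reading aid for `test_aud_string_vs_list`, the coercion amounts to the following sketch (`_audience_deserialize` is the refimpl name; this body is an approximation of the documented behaviour, not the actual code):

```python
def ect_audience_deserialize(aud):
    """Normalise aud to list[str], as the ECT refimpl is documented to do.
    ACT, by contrast, preserves str | list[str] as given."""
    return [aud] if isinstance(aud, str) else list(aud)


# String form is coerced by ECT; list form round-trips losslessly on both sides.
assert ect_audience_deserialize("verifier-1") == ["verifier-1"]
assert ect_audience_deserialize(["a", "b"]) == ["a", "b"]
```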
### 3.2 Algorithm compatibility (`TestAlgorithmMatrix`)

- `test_es256_act_record_signature_verifies_with_ect_key_resolver`: build a Phase 2 ACTRecord and sign it with an ES256 P-256 key. Feed the compact JWS bytes *and an ECT-shaped resolver* through `ect.verify`. Expect `ValueError("ect: invalid typ parameter")` because the typ is `act+jwt`. Document: the JWS/ES256 signature layer is compatible, but the typ gate prevents verifier reuse as-is.
- `test_eddsa_act_record_rejected_by_ect`: a Phase 2 ACTRecord signed with EdDSA. ECT must reject at the alg gate (`"ect: expected ES256"`). Documents the ES256-only limitation.
- `test_ect_payload_signature_verifies_with_act_crypto`: sign an ECT payload (ES256), strip it to the raw JWS, and feed the signature bytes through `act.crypto.verify` with the ECT public key. Expect success — proves the ES256 primitive is wire-compatible at the raw-signature level.
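The "shared ES256 primitive" claim behind these tests can be sketched without either refimpl, using only `cryptography`. The header/payload values are illustrative, and the sketch uses the DER signature form for brevity (real JWS ES256 uses the fixed-size r||s encoding):

```python
import base64
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")


key = ec.generate_private_key(ec.SECP256R1())

# JWS-style signing input: BASE64URL(header) || "." || BASE64URL(payload).
header = b64url(json.dumps({"alg": "ES256", "typ": "act+jwt"}).encode())
payload = b64url(json.dumps({"jti": "abc", "exec_act": "read.data"}).encode())
signing_input = header + b"." + payload

# ECDSA-P256-SHA256 over the signing input; verify() raises on mismatch.
sig = key.sign(signing_input, ec.ECDSA(hashes.SHA256()))
key.public_key().verify(sig, signing_input, ec.ECDSA(hashes.SHA256()))
```

The signature layer is oblivious to the `typ` value, which is exactly why the typ gate has to be enforced separately by each verifier.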
### 3.3 DAG cross-reference (`TestDagInterop`)

- `test_pred_array_referenceable_both_ways`: construct an ACT Record with `pred=[ect_jti]` and an ECT payload with `pred=[act_jti]`. Both refimpls accept the arrays structurally (the entries are opaque strings).
- `test_mixed_dag_is_out_of_scope`: document and assert that `ACTStore` only stores ACT Records and `ECTStore` only stores ECT payloads; neither is designed to resolve a `pred` jti from the other type. A bridging verifier would have to walk both stores — out of scope for the refimpls.
- `test_jti_collision_across_types`: the same UUID used as `jti` in an ACT Record and in an unrelated ECT payload — both refimpls accept independently; document that `jti` uniqueness is scoped per token type in the refimpls.
### 3.4 Semantic divergence (`TestClaimDivergence`)

- `test_ect_ignores_act_only_claims`: call ECT's `Payload.from_claims` on a dict that includes `sub`, `task`, `cap`, `oversight`, `exec_ts`, `status`. Expect: silently ignored (no error, no retention). Document as "ECT is lenient on unknown top-level claims".
- `test_act_ignores_ect_only_claims`: feed `ACTRecord.from_claims` a claim dict with `ect_ext` and `inp_classification`. Expect: silently ignored and not retained.
- `test_exec_act_not_validated_against_cap_in_ect`: an ACT Record with `exec_act="read.data"` and `cap=[{"action":"write.result"}]` → the ACT verifier raises `ACTCapabilityError`. The same `exec_act` in an ECT payload with no `cap` → ECT accepts. Documents the cap-validation asymmetry; guards against anyone accidentally copy-pasting cap logic into ECT.
- `test_act_requires_status_ect_does_not`: an ACTRecord without `status` → `ACTValidationError`. An ECT payload without `status` → accepted.
### 3.5 Anti-goals (encoded as negative tests)

- `test_act_jwt_typ_rejected_by_ect`: an ACT compact token with `typ=act+jwt` fed to `ect.verify` → MUST raise "invalid typ parameter".
- `test_exec_jwt_typ_rejected_by_act`: an ECT compact token with `typ=exec+jwt` fed to `act.decode_jws` → MUST raise `ACTValidationError` on the typ check.
- `test_no_forgery_as_other_type`: an explicit comment-only placeholder asserting that we do not re-encode one type as the other; kept as a documentation anchor.
## 4. Expected Compatibility Matrix (user-facing)

| Layer | Direction | Status | Notes |
|---|---|---|---|
| ES256 raw signature | ACT ↔ ECT | Compatible | Same JWS/ES256 primitive |
| EdDSA signature | ACT → ECT | Incompatible | ECT is ES256-only |
| `typ` header | ACT ↔ ECT | Strictly separated | By design |
| `jti`, `wid`, `iat`, `exp`, `aud`, `exec_act`, `pred` | Shared | Compatible | Identical wire shapes |
| `inp_hash`/`out_hash` | ACT → ECT | **Incompatible today** | ACT emits plain b64url; ECT requires `sha-256:<b64url>` |
| `inp_hash`/`out_hash` | ECT → ACT | Compatible | ACT treats the value as an opaque string |
| `cap` / `exec_act` coupling | ACT-only | N/A | ECT does not enforce it |
| DAG `pred` traversal | Separate stores | Manual bridging required | Refimpls do not cross-resolve |
## 5. Dependencies and Structure

Both packages must be importable in a single venv:

```
pip install -e packages/act -e packages/ect -e "packages/interop[dev]"
```

Proposed layout:

```
packages/
  act/ …
  ect/ …
  interop/
    pyproject.toml        # declares ietf-act, ietf-ect as deps
    tests/
      __init__.py
      conftest.py         # shared ES256 keypair + resolver fixtures
      test_interop.py     # classes Test{SharedClaims,AlgorithmMatrix,DagInterop,ClaimDivergence,AntiGoals}
    README.md             # published compatibility matrix
```

`conftest.py` exposes fixtures: `es256_keypair`, `act_record_builder`, `ect_payload_builder`, `dual_resolver` (one kid → the same ES256 pubkey for both refimpls).
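The fixture plumbing can be sketched as plain helpers that the proposed `conftest.py` fixtures would wrap. The fixture names come from the plan; the bodies below are assumptions about what they would return, shown without pytest so the key/resolver plumbing is visible on its own:

```python
from cryptography.hazmat.primitives.asymmetric import ec


def make_es256_keypair():
    """Backs the `es256_keypair` fixture: one P-256 key shared by both refimpls."""
    private = ec.generate_private_key(ec.SECP256R1())
    return private, private.public_key()


def make_dual_resolver(public_key, kid="interop-key-1"):
    """Backs `dual_resolver`: kid -> the same ES256 public key for ACT and ECT."""
    table = {kid: public_key}
    return table.get  # unknown kids resolve to None


priv, pub = make_es256_keypair()
resolver = make_dual_resolver(pub)
assert resolver("interop-key-1") is pub
```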
## 6. What the Compatibility Matrix Docs Should Tell Users

- **Do** reuse ES256 key material across ACT and ECT deployments — the signing primitive is identical.
- **Do not** feed ACT compact tokens to an ECT verifier or vice versa; `typ` gates are deliberate.
- **Do** treat `jti`, `wid`, `pred`, `exec_act` as semantically aligned when building cross-type audit logs.
- **Do not** rely on `inp_hash`/`out_hash` being portable today — raise a spec issue if portability matters for your deployment.
- **Do not** expect ECT to enforce ACT's `cap`/`exec_act` coupling — authorization remains an ACT concern.
- **Open question for spec editors**: align hash encoding (plain b64url vs prefixed), and decide whether Ed25519 should be optional-to-support for ECT.
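The `typ` gate in the second bullet can be enforced before any signature work, by peeking at the JOSE header of the compact serialization. A minimal sketch, assuming standard compact JWS; the concrete `typ` values used in the test are placeholders, not taken from the drafts:

```python
import base64
import json


def jose_header(compact: str) -> dict:
    """Decode the JOSE header segment of a compact JWS (no signature check)."""
    header_b64 = compact.split(".", 1)[0]
    # Restore base64url padding before decoding.
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


def check_typ(compact: str, expected_typ: str) -> None:
    """Reject tokens whose typ header does not match this verifier's type."""
    typ = jose_header(compact).get("typ")
    if typ != expected_typ:
        raise ValueError(f"typ gate: expected {expected_typ!r}, got {typ!r}")
```

An ACT verifier would call `check_typ(token, <ACT typ>)` as its first step and fail closed on ECT tokens, and vice versa.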
119
workspace/packages/act/act/__init__.py
Normal file
@@ -0,0 +1,119 @@
"""Agent Context Token (ACT) — Reference Implementation.

A JWT-based format for autonomous AI agents that unifies authorization
and execution accountability in a single token lifecycle.

Reference: draft-nennemann-act-01.
"""

from .errors import (
    ACTAudienceMismatchError,
    ACTCapabilityError,
    ACTDAGError,
    ACTDelegationError,
    ACTError,
    ACTExpiredError,
    ACTKeyResolutionError,
    ACTLedgerImmutabilityError,
    ACTPhaseError,
    ACTPrivilegeEscalationError,
    ACTSignatureError,
    ACTValidationError,
)
from .token import (
    ACTMandate,
    ACTRecord,
    Capability,
    Delegation,
    DelegationEntry,
    ErrorClaim,
    Oversight,
    TaskClaim,
    decode_jws,
    encode_jws,
    parse_token,
)
from .crypto import (
    ACTKeyResolver,
    KeyRegistry,
    PublicKey,
    PrivateKey,
    X509TrustStore,
    b64url_sha256,
    compute_sha256,
    did_key_from_ed25519,
    generate_ed25519_keypair,
    generate_p256_keypair,
    resolve_did_key,
    sign,
    verify,
)
from .lifecycle import transition_to_record
from .delegation import (
    create_delegated_mandate,
    verify_capability_subset,
    verify_delegation_chain,
)
from .dag import validate_dag, ACTStore
from .ledger import ACTLedger
from .verify import ACTVerifier
from .vectors import generate_vectors, validate_vectors

__all__ = [
    # Errors
    "ACTError",
    "ACTValidationError",
    "ACTSignatureError",
    "ACTExpiredError",
    "ACTAudienceMismatchError",
    "ACTCapabilityError",
    "ACTDelegationError",
    "ACTDAGError",
    "ACTPhaseError",
    "ACTKeyResolutionError",
    "ACTLedgerImmutabilityError",
    "ACTPrivilegeEscalationError",
    # Token structures
    "ACTMandate",
    "ACTRecord",
    "TaskClaim",
    "Capability",
    "Delegation",
    "DelegationEntry",
    "Oversight",
    "ErrorClaim",
    # Token serialization
    "encode_jws",
    "decode_jws",
    "parse_token",
    # Crypto
    "generate_ed25519_keypair",
    "generate_p256_keypair",
    "sign",
    "verify",
    "compute_sha256",
    "b64url_sha256",
    "resolve_did_key",
    "did_key_from_ed25519",
    "KeyRegistry",
    "X509TrustStore",
    "ACTKeyResolver",
    "PublicKey",
    "PrivateKey",
    # Lifecycle
    "transition_to_record",
    # Delegation
    "create_delegated_mandate",
    "verify_capability_subset",
    "verify_delegation_chain",
    # DAG
    "validate_dag",
    "ACTStore",
    # Ledger
    "ACTLedger",
    # Verify
    "ACTVerifier",
    # Vectors
    "generate_vectors",
    "validate_vectors",
]
467
workspace/packages/act/act/crypto.py
Normal file
@@ -0,0 +1,467 @@
"""ACT cryptographic primitives and key management.

Provides sign/verify operations and key resolution across all three
ACT trust tiers:

- Tier 1: Pre-shared Ed25519 and P-256 keys
- Tier 2: PKI / X.509 certificate chains
- Tier 3: DID (did:key self-contained, did:web via resolver callback)

Reference: ACT §5 (Trust Model), §8 (Verification Procedure).
"""

from __future__ import annotations

import base64
import hashlib
from typing import Any, Callable

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ec import (
    ECDSA,
    SECP256R1,
    EllipticCurvePrivateKey,
    EllipticCurvePublicKey,
    generate_private_key as ec_generate_private_key,
)
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.x509 import (
    Certificate,
    load_der_x509_certificate,
)

from .errors import (
    ACTKeyResolutionError,
    ACTSignatureError,
    ACTValidationError,
)

# Type aliases for public/private keys supported by ACT.
PublicKey = Ed25519PublicKey | EllipticCurvePublicKey
PrivateKey = Ed25519PrivateKey | EllipticCurvePrivateKey

# Callback type for did:web resolution.
DIDResolver = Callable[[str], PublicKey | None]


def generate_ed25519_keypair() -> tuple[Ed25519PrivateKey, Ed25519PublicKey]:
    """Generate an Ed25519 key pair for ACT signing.

    Returns a (private_key, public_key) tuple. The private key object
    carries its associated public key per ACT security requirements.

    Reference: ACT §5.2 (Tier 1 pre-shared keys).
    """
    private_key = Ed25519PrivateKey.generate()
    return private_key, private_key.public_key()


def generate_p256_keypair() -> tuple[EllipticCurvePrivateKey, EllipticCurvePublicKey]:
    """Generate a P-256 (ES256) key pair for ACT signing.

    Returns a (private_key, public_key) tuple.

    Reference: ACT §5.2 (Tier 1 pre-shared keys).
    """
    private_key = ec_generate_private_key(SECP256R1())
    return private_key, private_key.public_key()


def sign(private_key: PrivateKey, data: bytes) -> bytes:
    """Sign data using the appropriate algorithm for the key type.

    Uses Ed25519 for Ed25519PrivateKey, ECDSA with SHA-256 for P-256.
    Returns raw signature bytes (for Ed25519: 64 bytes; for ES256:
    raw r||s format per RFC 7518 §3.4).

    Reference: ACT §5, RFC 7515 §5.1.

    Raises:
        ACTValidationError: If the key type is not supported.
    """
    if isinstance(private_key, Ed25519PrivateKey):
        return private_key.sign(data)
    elif isinstance(private_key, EllipticCurvePrivateKey):
        from cryptography.hazmat.primitives.asymmetric.utils import (
            decode_dss_signature,
        )

        # Sign with a DER-encoded signature, then convert to raw r||s.
        der_sig = private_key.sign(data, ECDSA(SHA256()))
        r, s = decode_dss_signature(der_sig)
        # P-256 uses 32-byte integers.
        return r.to_bytes(32, "big") + s.to_bytes(32, "big")
    else:
        raise ACTValidationError(f"Unsupported key type: {type(private_key)}")


def verify(public_key: PublicKey, signature: bytes, data: bytes) -> None:
    """Verify a signature against the given public key and data.

    Reference: ACT §8.1 step 5.

    Raises:
        ACTSignatureError: If the signature is invalid.
        ACTValidationError: If the key type is not supported.
    """
    try:
        if isinstance(public_key, Ed25519PublicKey):
            public_key.verify(signature, data)
        elif isinstance(public_key, EllipticCurvePublicKey):
            from cryptography.hazmat.primitives.asymmetric.utils import (
                encode_dss_signature,
            )

            # Convert raw r||s back to DER.
            r = int.from_bytes(signature[:32], "big")
            s = int.from_bytes(signature[32:], "big")
            der_sig = encode_dss_signature(r, s)
            public_key.verify(der_sig, data, ECDSA(SHA256()))
        else:
            raise ACTValidationError(
                f"Unsupported key type: {type(public_key)}"
            )
    except InvalidSignature as e:
        raise ACTSignatureError("Signature verification failed") from e


def compute_sha256(data: bytes) -> bytes:
    """Compute the SHA-256 hash of data.

    Used for delegation chain signatures and inp_hash/out_hash claims.

    Reference: ACT §6.1 (delegation sig), §4.3 (inp_hash, out_hash).
    """
    return hashlib.sha256(data).digest()


def b64url_sha256(data: bytes) -> str:
    """Compute base64url(SHA-256(data)) without padding.

    Used for inp_hash and out_hash claims.

    Reference: ACT §4.3.
    """
    digest = compute_sha256(data)
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


def x509_kid(cert_der: bytes) -> str:
    """Compute the Tier 2 kid: SHA-256 thumbprint of the DER certificate.

    Reference: ACT §5.3 (Tier 2 kid format).
    """
    return hashlib.sha256(cert_der).hexdigest()
class KeyRegistry:
    """Tier 1 pre-shared key registry.

    Maps kid strings to public keys. Configured at initialization time
    with no external resolution needed.

    Reference: ACT §5.2 (Tier 1 Pre-Shared Keys).
    """

    def __init__(self) -> None:
        self._keys: dict[str, PublicKey] = {}

    def register(self, kid: str, public_key: PublicKey) -> None:
        """Register a public key under the given kid.

        Reference: ACT §5.2.
        """
        self._keys[kid] = public_key

    def get(self, kid: str) -> PublicKey | None:
        """Retrieve the public key for a kid, or None if not found."""
        return self._keys.get(kid)

    def __contains__(self, kid: str) -> bool:
        return kid in self._keys

    def __len__(self) -> int:
        return len(self._keys)


class X509TrustStore:
    """Tier 2 PKI/X.509 trust store.

    Holds trusted CA certificates and resolves kid (certificate
    thumbprint) to public keys. Supports x5c header chain validation.

    Reference: ACT §5.3 (Tier 2 PKI).
    """

    def __init__(self) -> None:
        self._trusted_certs: dict[str, Certificate] = {}

    def add_trusted_cert(self, cert: Certificate) -> str:
        """Add a trusted certificate to the store.

        Returns the kid (SHA-256 thumbprint of the DER encoding).

        Reference: ACT §5.3.
        """
        from cryptography.hazmat.primitives.serialization import Encoding

        der_bytes = cert.public_bytes(Encoding.DER)
        kid = x509_kid(der_bytes)
        self._trusted_certs[kid] = cert
        return kid

    def resolve(self, kid: str) -> PublicKey | None:
        """Resolve a kid to a public key from a trusted certificate.

        Reference: ACT §5.3, §8.1 step 4.
        """
        cert = self._trusted_certs.get(kid)
        if cert is None:
            return None
        pub = cert.public_key()
        if isinstance(pub, (Ed25519PublicKey, EllipticCurvePublicKey)):
            return pub
        return None

    def resolve_x5c(self, x5c: list[str]) -> PublicKey | None:
        """Resolve a public key from an x5c certificate chain.

        The first entry in x5c is the end-entity certificate.
        Validates that the chain terminates in a trusted CA.

        Reference: ACT §4.1 (x5c header), §5.3.
        """
        if not x5c:
            return None
        try:
            # Decode certificates from base64 DER.
            certs = [
                load_der_x509_certificate(base64.b64decode(c)) for c in x5c
            ]
        except Exception:
            return None

        # Check if any cert in the chain is in our trust store.
        from cryptography.hazmat.primitives.serialization import Encoding

        for cert in certs:
            der_bytes = cert.public_bytes(Encoding.DER)
            kid = x509_kid(der_bytes)
            if kid in self._trusted_certs:
                # The end-entity cert is the first one.
                ee_pub = certs[0].public_key()
                if isinstance(ee_pub, (Ed25519PublicKey, EllipticCurvePublicKey)):
                    return ee_pub
                return None
        return None
# --- Tier 3: DID Support ---

# Multicodec prefixes for did:key.
_ED25519_MULTICODEC = b"\xed\x01"
_P256_MULTICODEC = b"\x80\x24"


def _multibase_decode(encoded: str) -> bytes:
    """Decode a multibase-encoded string (base58btc 'z' prefix).

    Reference: ACT §5.4 (Tier 3 DID:key).
    """
    if not encoded.startswith("z"):
        raise ACTKeyResolutionError(
            f"Unsupported multibase encoding prefix: {encoded[0]!r}"
        )
    return _base58btc_decode(encoded[1:])


def _base58btc_decode(s: str) -> bytes:
    """Decode a base58btc string."""
    alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    n = 0
    for ch in s:
        idx = alphabet.index(ch)
        n = n * 58 + idx
    # Compute the byte length.
    byte_length = (n.bit_length() + 7) // 8
    result = n.to_bytes(byte_length, "big") if byte_length > 0 else b""
    # Preserve leading zeros (encoded as leading '1' characters).
    leading_zeros = len(s) - len(s.lstrip("1"))
    return b"\x00" * leading_zeros + result


def resolve_did_key(did: str) -> PublicKey:
    """Resolve a did:key identifier to a public key.

    Supports Ed25519 and P-256 key types. The did:key method is
    self-contained — no external resolution is needed.

    Reference: ACT §5.4 (Tier 3 DID:key).

    Raises:
        ACTKeyResolutionError: If the DID cannot be resolved.
    """
    # Strip the fragment if present (e.g., did:key:z6Mk...#z6Mk...).
    did_base = did.split("#")[0]

    if not did_base.startswith("did:key:"):
        raise ACTKeyResolutionError(
            f"Not a did:key identifier: {did!r}"
        )

    multibase_value = did_base[len("did:key:"):]
    try:
        decoded = _multibase_decode(multibase_value)
    except Exception as e:
        raise ACTKeyResolutionError(
            f"Failed to decode did:key multibase value: {e}"
        ) from e

    if decoded[:2] == _ED25519_MULTICODEC:
        raw_key = decoded[2:]
        if len(raw_key) != 32:
            raise ACTKeyResolutionError(
                f"Ed25519 public key must be 32 bytes, got {len(raw_key)}"
            )
        return Ed25519PublicKey.from_public_bytes(raw_key)
    elif decoded[:2] == _P256_MULTICODEC:
        raw_key = decoded[2:]
        # P-256 key is an SEC1 encoded point: compressed (33 bytes)
        # or uncompressed (65 bytes).
        try:
            return EllipticCurvePublicKey.from_encoded_point(
                SECP256R1(), raw_key
            )
        except Exception as e:
            raise ACTKeyResolutionError(
                f"Failed to load P-256 key from did:key: {e}"
            ) from e
    else:
        raise ACTKeyResolutionError(
            f"Unsupported multicodec prefix in did:key: {decoded[:2]!r}"
        )


def did_key_from_ed25519(public_key: Ed25519PublicKey) -> str:
    """Create a did:key identifier from an Ed25519 public key.

    Reference: ACT §5.4 (Tier 3 DID:key).
    """
    from cryptography.hazmat.primitives.serialization import (
        Encoding,
        PublicFormat,
    )

    raw = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
    multicodec = _ED25519_MULTICODEC + raw
    encoded = "z" + _base58btc_encode(multicodec)
    return f"did:key:{encoded}"


def _base58btc_encode(data: bytes) -> str:
    """Encode bytes as base58btc."""
    alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    # Count leading zero bytes (encoded as leading '1' characters).
    leading_zeros = 0
    for b in data:
        if b == 0:
            leading_zeros += 1
        else:
            break
    n = int.from_bytes(data, "big")
    if n == 0:
        return "1" * leading_zeros
    chars: list[str] = []
    while n > 0:
        n, remainder = divmod(n, 58)
        chars.append(alphabet[remainder])
    return "1" * leading_zeros + "".join(reversed(chars))


class ACTKeyResolver:
    """Unified key resolver across all trust tiers.

    Tries Tier 1 (pre-shared), then Tier 2 (X.509), then Tier 3 (DID)
    to resolve a kid to a public key.

    Reference: ACT §5 (Trust Model), §8.1 step 4.
    """

    def __init__(
        self,
        registry: KeyRegistry | None = None,
        x509_store: X509TrustStore | None = None,
        did_web_resolver: DIDResolver | None = None,
    ) -> None:
        self._registry = registry or KeyRegistry()
        self._x509_store = x509_store or X509TrustStore()
        self._did_web_resolver = did_web_resolver

    @property
    def registry(self) -> KeyRegistry:
        """Access the Tier 1 key registry."""
        return self._registry

    @property
    def x509_store(self) -> X509TrustStore:
        """Access the Tier 2 X.509 trust store."""
        return self._x509_store

    def resolve(
        self,
        kid: str,
        header: dict[str, Any] | None = None,
    ) -> PublicKey:
        """Resolve a kid to a public key, trying all configured tiers.

        Resolution order:
        1. Tier 1: Pre-shared key registry lookup by kid
        2. Tier 2: X.509 certificate lookup by kid (thumbprint)
           or x5c header chain validation
        3. Tier 3: DID resolution (did:key or did:web)

        Reference: ACT §5 (Trust Model), §8.1 step 4.

        Raises:
            ACTKeyResolutionError: If no key can be resolved for the kid.
        """
        header = header or {}

        # Tier 1: Pre-shared keys.
        key = self._registry.get(kid)
        if key is not None:
            return key

        # Tier 2: X.509 by thumbprint.
        key = self._x509_store.resolve(kid)
        if key is not None:
            return key

        # Tier 2: x5c chain in the header.
        x5c = header.get("x5c")
        if x5c:
            key = self._x509_store.resolve_x5c(x5c)
            if key is not None:
                return key

        # Tier 3: DID.
        did_value = header.get("did") or kid
        if did_value.startswith("did:key:"):
            try:
                return resolve_did_key(did_value)
            except ACTKeyResolutionError:
                pass

        if did_value.startswith("did:web:") and self._did_web_resolver:
            resolved = self._did_web_resolver(did_value)
            if resolved is not None:
                return resolved

        raise ACTKeyResolutionError(
            f"Cannot resolve kid {kid!r} to a public key via any trust tier"
        )
136
workspace/packages/act/act/dag.py
Normal file
@@ -0,0 +1,136 @@
"""ACT DAG validation for Phase 2 execution records.

Validates the directed acyclic graph formed by pred (predecessor) references
in Phase 2 ACTs, ensuring uniqueness, predecessor existence, temporal ordering,
acyclicity, and capability consistency.

Reference: ACT §7 (DAG Structure and Causal Ordering).
"""

from __future__ import annotations

from typing import Protocol

from .errors import ACTCapabilityError, ACTDAGError
from .token import ACTRecord

# Maximum ancestor traversal limit for cycle detection — ACT §7.1 step 4.
MAX_TRAVERSAL_LIMIT: int = 10_000

# Clock skew tolerance (seconds) for temporal ordering — ACT §7.1 step 3.
DAG_CLOCK_SKEW_TOLERANCE: int = 30


class ACTStore(Protocol):
    """Protocol for an ACT store used in DAG validation.

    Any object implementing get() can serve as the store.
    The ACTLedger in ledger.py implements this protocol.
    """

    def get(self, jti: str) -> ACTRecord | None:
        """Retrieve a Phase 2 ACT record by jti."""
        ...


def validate_dag(
    record: ACTRecord,
    store: ACTStore,
    *,
    clock_skew_tolerance: int = DAG_CLOCK_SKEW_TOLERANCE,
) -> None:
    """Validate the DAG constraints for a Phase 2 execution record.

    Performs all five DAG validation checks defined in ACT §7.1:
    1. jti uniqueness within wid scope (or globally)
    2. Predecessor existence in the store
    3. Temporal ordering with clock skew tolerance
    4. Acyclicity (max traversal limit)
    5. Capability consistency (exec_act matches cap[].action)

    Reference: ACT §7.1 (DAG Validation).

    Args:
        record: The Phase 2 ACTRecord to validate.
        store: An ACT store providing get() for predecessor lookup.
        clock_skew_tolerance: Seconds of allowed clock skew (default 30).

    Raises:
        ACTDAGError: If any DAG constraint is violated.
        ACTCapabilityError: If exec_act does not match cap actions.
    """
    # Step 1: jti uniqueness — ACT §7.1 step 1.
    existing = store.get(record.jti)
    if existing is not None:
        raise ACTDAGError(
            f"Duplicate jti {record.jti!r} already exists in store"
        )

    # Step 5: Capability consistency — ACT §7.1 step 5.
    cap_actions = {c.action for c in record.cap}
    if record.exec_act not in cap_actions:
        raise ACTCapabilityError(
            f"exec_act {record.exec_act!r} does not match any "
            f"cap[].action: {sorted(cap_actions)}"
        )

    # Steps 2 & 3: Predecessor existence and temporal ordering.
    for pred_jti in record.pred:
        parent = store.get(pred_jti)
        if parent is None:
            raise ACTDAGError(
                f"Predecessor jti {pred_jti!r} not found in store"
            )

        # Temporal ordering: predecessor.exec_ts < child.exec_ts + tolerance.
        if parent.exec_ts >= record.exec_ts + clock_skew_tolerance:
            raise ACTDAGError(
                f"Temporal ordering violation: predecessor {pred_jti!r} "
                f"exec_ts={parent.exec_ts} >= child exec_ts="
                f"{record.exec_ts} + tolerance={clock_skew_tolerance}"
            )

    # Step 4: Acyclicity — ACT §7.1 step 4.
    _check_acyclicity(record.jti, record.pred, store)


def _check_acyclicity(
    current_jti: str,
    pred_jtis: list[str],
    store: ACTStore,
) -> None:
    """Check that following pred references does not lead back to current_jti.

    Uses breadth-first traversal with a maximum node limit.

    Reference: ACT §7.1 step 4.

    Raises:
        ACTDAGError: If a cycle is detected or the traversal limit is exceeded.
    """
    visited: set[str] = set()
    queue: list[str] = list(pred_jtis)
    nodes_visited = 0

    while queue:
        if nodes_visited >= MAX_TRAVERSAL_LIMIT:
            raise ACTDAGError(
                f"DAG traversal limit ({MAX_TRAVERSAL_LIMIT}) exceeded; "
                f"possible cycle or excessively deep DAG"
            )

        jti = queue.pop(0)
        if jti == current_jti:
            raise ACTDAGError(
                f"DAG cycle detected: jti {current_jti!r} appears in "
                f"its own ancestor chain"
            )

        if jti in visited:
            continue
        visited.add(jti)
        nodes_visited += 1

        parent = store.get(jti)
        if parent is not None:
            queue.extend(parent.pred)
333
workspace/packages/act/act/delegation.py
Normal file
@@ -0,0 +1,333 @@
"""ACT delegation chain construction and verification.

Handles peer-to-peer delegation where Agent A authorizes Agent B
with reduced privileges, building a cryptographic chain of authority.

Reference: ACT §6 (Delegation Chain).
"""

from __future__ import annotations

import base64
from typing import Any

from .crypto import (
    PrivateKey,
    PublicKey,
    compute_sha256,
    sign as crypto_sign,
    verify as crypto_verify,
)
from .errors import (
    ACTDelegationError,
    ACTPrivilegeEscalationError,
    ACTValidationError,
)
from .token import (
    ACTMandate,
    Capability,
    Delegation,
    DelegationEntry,
    _b64url_encode,
    _b64url_decode,
    encode_jws,
)


def create_delegated_mandate(
    parent_mandate: ACTMandate,
    parent_compact: str,
    delegator_private_key: PrivateKey,
    *,
    sub: str,
    kid: str,
    iss: str,
    aud: str | list[str],
    iat: int,
    exp: int,
    jti: str,
    cap: list[Capability],
    task: Any,
    alg: str = "EdDSA",
    wid: str | None = None,
    max_depth: int | None = None,
    oversight: Any | None = None,
) -> tuple[ACTMandate, str]:
    """Create a delegated ACT mandate from a parent mandate.

    Agent A (delegator) creates a new mandate for Agent B (sub) with
    reduced privileges. The delegation chain is extended with a new
    entry linking back to the parent ACT.

    Reference: ACT §6.1 (Peer-to-Peer Delegation).

    Args:
        parent_mandate: The parent ACT that authorizes delegation.
        parent_compact: JWS compact serialization of the parent ACT.
        delegator_private_key: The delegator's private key for the chain sig.
        sub: Target agent identifier.
        kid: Key identifier for the new mandate's signing key.
        iss: Issuer identifier (the delegator).
        aud: Audience for the new mandate.
        iat: Issuance time.
        exp: Expiration time.
        jti: Unique identifier for the new mandate.
        cap: Capabilities (must be a subset of the parent's).
        task: TaskClaim for the new mandate.
        alg: Algorithm (default EdDSA).
        wid: Workflow identifier (optional).
        max_depth: Max delegation depth (must be <= parent's).
        oversight: Oversight claim (optional).

    Returns:
        A (mandate, compact) tuple; the compact slot is returned empty
        because the new mandate still needs to be signed by the delegator.

    Raises:
        ACTDelegationError: If delegation depth would exceed max_depth.
        ACTPrivilegeEscalationError: If cap exceeds parent capabilities.
    """
    # Determine the parent delegation state.
    if parent_mandate.delegation is not None:
        parent_depth = parent_mandate.delegation.depth
        parent_max_depth = parent_mandate.delegation.max_depth
        parent_chain = list(parent_mandate.delegation.chain)
    else:
        # Root mandate without a del claim — delegation not permitted.
        raise ACTDelegationError(
            "Parent mandate has no 'del' claim; delegation is not permitted"
        )

    new_depth = parent_depth + 1

    # Validate depth constraints — ACT §6.3 step 3.
    if new_depth > parent_max_depth:
        raise ACTDelegationError(
            f"Delegation depth {new_depth} exceeds max_depth {parent_max_depth}"
        )

    # Validate max_depth — ACT §6.1 step 4.
    if max_depth is None:
        effective_max_depth = parent_max_depth
    else:
        if max_depth > parent_max_depth:
            raise ACTDelegationError(
                f"Requested max_depth {max_depth} exceeds parent max_depth "
                f"{parent_max_depth}"
            )
        effective_max_depth = max_depth

    # Validate the capability subset — ACT §6.2.
    verify_capability_subset(parent_mandate.cap, cap)

    # Compute the chain entry signature — ACT §6.1 step 5.
    parent_hash = compute_sha256(parent_compact.encode("utf-8"))
    chain_sig = crypto_sign(delegator_private_key, parent_hash)
    chain_sig_b64 = _b64url_encode(chain_sig)

    # Build the new chain entry.
    new_entry = DelegationEntry(
        delegator=iss,
        jti=parent_mandate.jti,
        sig=chain_sig_b64,
    )

    # Extend the chain — ordered root → immediate parent.
    new_chain = parent_chain + [new_entry]

    delegation = Delegation(
        depth=new_depth,
        max_depth=effective_max_depth,
        chain=new_chain,
    )

    mandate = ACTMandate(
        alg=alg,
        kid=kid,
        iss=iss,
        sub=sub,
        aud=aud,
        iat=iat,
        exp=exp,
        jti=jti,
        wid=wid if wid is not None else parent_mandate.wid,
        task=task,
        cap=cap,
        delegation=delegation,
        oversight=oversight,
    )

    return mandate, ""


def verify_capability_subset(
    parent_caps: list[Capability],
    child_caps: list[Capability],
) -> None:
    """Verify that child capabilities are a subset of parent capabilities.

    Each child capability action must exist in the parent. Constraints
    must be at least as restrictive.

    Reference: ACT §6.2 (Privilege Reduction Requirements).

    Raises:
        ACTPrivilegeEscalationError: If a child cap exceeds the parent cap.
    """
    parent_actions = {c.action: c for c in parent_caps}

    for child_cap in child_caps:
        if child_cap.action not in parent_actions:
            raise ACTPrivilegeEscalationError(
                f"Capability action {child_cap.action!r} not present in "
                f"parent capabilities: {sorted(parent_actions.keys())}"
            )

        parent_cap = parent_actions[child_cap.action]
        _verify_constraints_subset(
            parent_cap.constraints, child_cap.constraints, child_cap.action
        )


def _verify_constraints_subset(
    parent_constraints: dict[str, Any] | None,
    child_constraints: dict[str, Any] | None,
    action: str,
) -> None:
    """Verify child constraints are at least as restrictive as the parent's.

    Reference: ACT §6.2 (Privilege Reduction Requirements).

    Rules:
    - Numeric values: child must be <= parent (lower = more restrictive)
    - data_sensitivity enum: child must be >= parent in ordering
|
||||||
|
- Unknown/domain-specific: must be byte-for-byte identical
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
ACTPrivilegeEscalationError: If child constraint is less restrictive.
|
||||||
|
"""
|
||||||
|
if parent_constraints is None:
|
||||||
|
# Parent has no constraints — child may add constraints (more restrictive)
|
||||||
|
return
|
||||||
|
|
||||||
|
if child_constraints is None:
|
||||||
|
# Parent has constraints but child does not — escalation
|
||||||
|
raise ACTPrivilegeEscalationError(
|
||||||
|
f"Capability {action!r}: parent has constraints but child does not"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Sensitivity ordering per ACT §6.2
|
||||||
|
_SENSITIVITY_ORDER = {
|
||||||
|
"public": 0,
|
||||||
|
"internal": 1,
|
||||||
|
"confidential": 2,
|
||||||
|
"restricted": 3,
|
||||||
|
}
|
||||||
|
|
||||||
|
for key, parent_val in parent_constraints.items():
|
||||||
|
if key not in child_constraints:
|
||||||
|
# Missing constraint in child = less restrictive
|
||||||
|
raise ACTPrivilegeEscalationError(
|
||||||
|
f"Capability {action!r}: constraint {key!r} present in "
|
||||||
|
f"parent but missing in child"
|
||||||
|
)
|
||||||
|
|
||||||
|
child_val = child_constraints[key]
|
||||||
|
|
||||||
|
if key == "data_sensitivity" or key == "data_classification_max":
|
||||||
|
# Enum comparison — higher = more restrictive
|
||||||
|
p_ord = _SENSITIVITY_ORDER.get(parent_val)
|
||||||
|
c_ord = _SENSITIVITY_ORDER.get(child_val)
|
||||||
|
if p_ord is not None and c_ord is not None:
|
||||||
|
if c_ord < p_ord:
|
||||||
|
raise ACTPrivilegeEscalationError(
|
||||||
|
f"Capability {action!r}: constraint {key!r} "
|
||||||
|
f"value {child_val!r} is less restrictive than "
|
||||||
|
f"parent value {parent_val!r}"
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if isinstance(parent_val, (int, float)) and isinstance(child_val, (int, float)):
|
||||||
|
# Numeric: lower/equal = more restrictive
|
||||||
|
if child_val > parent_val:
|
||||||
|
raise ACTPrivilegeEscalationError(
|
||||||
|
f"Capability {action!r}: numeric constraint {key!r} "
|
||||||
|
f"value {child_val} exceeds parent value {parent_val}"
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Unknown/domain-specific: must be identical — ACT §6.2
|
||||||
|
if child_val != parent_val:
|
||||||
|
raise ACTPrivilegeEscalationError(
|
||||||
|
f"Capability {action!r}: constraint {key!r} value "
|
||||||
|
f"{child_val!r} differs from parent value {parent_val!r} "
|
||||||
|
f"(non-comparable constraints must be identical)"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def verify_delegation_chain(
|
||||||
|
mandate: ACTMandate,
|
||||||
|
resolve_key: Any,
|
||||||
|
resolve_parent_compact: Any | None = None,
|
||||||
|
) -> None:
|
||||||
|
"""Verify the delegation chain of a mandate.
|
||||||
|
|
||||||
|
Reference: ACT §6.3 (Delegation Verification).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
mandate: The ACT mandate to verify.
|
||||||
|
resolve_key: Callable(delegator_id: str) -> PublicKey to resolve
|
||||||
|
the public key of a delegator.
|
||||||
|
resolve_parent_compact: Optional callable(jti: str) -> str|None
|
||||||
|
to retrieve the parent ACT compact form.
|
||||||
|
Required for full chain sig verification.
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
ACTDelegationError: If the chain is structurally invalid.
|
||||||
|
ACTPrivilegeEscalationError: If capabilities were escalated.
|
||||||
|
"""
|
||||||
|
if mandate.delegation is None:
|
||||||
|
# No delegation — root mandate, nothing to verify
|
||||||
|
return
|
||||||
|
|
||||||
|
delegation = mandate.delegation
|
||||||
|
|
||||||
|
# Step 3: depth <= max_depth
|
||||||
|
if delegation.depth > delegation.max_depth:
|
||||||
|
raise ACTDelegationError(
|
||||||
|
f"Delegation depth {delegation.depth} exceeds "
|
||||||
|
f"max_depth {delegation.max_depth}"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Step 4: chain length == depth
|
||||||
|
if len(delegation.chain) != delegation.depth:
|
||||||
|
raise ACTDelegationError(
|
||||||
|
f"Delegation chain length {len(delegation.chain)} does not "
|
||||||
|
f"match depth {delegation.depth}"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Step 2: verify each chain entry
|
||||||
|
for i, entry in enumerate(delegation.chain):
|
||||||
|
# Step 2a: resolve delegator's public key
|
||||||
|
try:
|
||||||
|
pub_key = resolve_key(entry.delegator)
|
||||||
|
except Exception as e:
|
||||||
|
raise ACTDelegationError(
|
||||||
|
f"Cannot resolve key for delegator {entry.delegator!r} "
|
||||||
|
f"at chain index {i}: {e}"
|
||||||
|
) from e
|
||||||
|
|
||||||
|
# Step 2b: verify signature if parent compact is available
|
||||||
|
if resolve_parent_compact is not None:
|
||||||
|
parent_compact = resolve_parent_compact(entry.jti)
|
||||||
|
if parent_compact is not None:
|
||||||
|
parent_hash = compute_sha256(
|
||||||
|
parent_compact.encode("utf-8")
|
||||||
|
)
|
||||||
|
sig_bytes = _b64url_decode(entry.sig)
|
||||||
|
try:
|
||||||
|
crypto_verify(pub_key, sig_bytes, parent_hash)
|
||||||
|
except Exception as e:
|
||||||
|
raise ACTDelegationError(
|
||||||
|
f"Chain entry signature verification failed at "
|
||||||
|
f"index {i} (delegator={entry.delegator!r}): {e}"
|
||||||
|
) from e
|
||||||
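The §6.2 constraint-subset rules implemented above (numeric values may only tighten downward, data_sensitivity may only move upward in the enum ordering, and non-comparable values must match exactly) can be exercised outside the package. A minimal standalone sketch; `is_subset` is an illustrative helper, not part of the refimpl:

```python
from typing import Any

# Sensitivity ordering per ACT §6.2: higher ordinal = more restrictive.
_SENSITIVITY_ORDER = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}


def is_subset(parent: dict[str, Any], child: dict[str, Any]) -> bool:
    """Return True iff child constraints are at least as restrictive as parent."""
    for key, p_val in parent.items():
        if key not in child:
            return False  # dropped constraint = escalation
        c_val = child[key]
        if key == "data_sensitivity":
            if _SENSITIVITY_ORDER[c_val] < _SENSITIVITY_ORDER[p_val]:
                return False  # child moved down the ordering = less restrictive
            continue
        if isinstance(p_val, (int, float)) and isinstance(c_val, (int, float)):
            if c_val > p_val:
                return False  # numeric constraint loosened
            continue
        if c_val != p_val:
            return False  # non-comparable values must be identical
    return True


print(is_subset({"max_amount": 100}, {"max_amount": 50}))   # True
print(is_subset({"max_amount": 100}, {"max_amount": 200}))  # False
```

The refimpl raises ACTPrivilegeEscalationError instead of returning False, but the comparison logic is the same.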
131
workspace/packages/act/act/errors.py
Normal file
@@ -0,0 +1,131 @@
"""ACT-specific exception types.

All exceptions defined in this module correspond to specific failure
modes in the Agent Context Token lifecycle as defined in
draft-nennemann-act-01.

Reference: ACT §8 (Verification Procedure), §6 (Delegation Chain),
§7 (DAG Structure), §10 (Audit Ledger Interface).
"""

from __future__ import annotations


class ACTError(Exception):
    """Base exception for all ACT operations.

    All ACT-specific exceptions inherit from this class, allowing
    callers to catch any ACT error with a single except clause.

    Reference: draft-nennemann-act-01.
    """


class ACTValidationError(ACTError):
    """Malformed token structure or invalid field values.

    Raised when an ACT fails structural validation: missing required
    claims, invalid claim types, unsupported algorithm ("none", HS*),
    or invalid typ header.

    Reference: ACT §4 (Token Format), §8.1 steps 2-3, 11.
    """


class ACTSignatureError(ACTError):
    """Signature verification failed.

    Raised when a JWS signature cannot be verified against the
    resolved public key, or when a Phase 2 token is signed by the
    wrong key (e.g., iss key instead of sub key).

    Reference: ACT §8.1 step 5, §8.2 step 17.
    """


class ACTExpiredError(ACTError):
    """Token has expired.

    Raised when the current time exceeds exp + clock_skew_tolerance.
    The default clock skew tolerance is 300 seconds.

    Reference: ACT §8.1 step 6.
    """


class ACTAudienceMismatchError(ACTError):
    """The aud claim does not contain the verifier's identity.

    Reference: ACT §8.1 step 8.
    """


class ACTCapabilityError(ACTError):
    """No matching capability or exec_act not in cap actions.

    Raised when exec_act does not match any cap[].action value,
    or when a requested action is not authorized by any capability.

    Reference: ACT §8.2 step 13, §4.2.2 (cap).
    """


class ACTDelegationError(ACTError):
    """Delegation chain is invalid.

    Raised when delegation chain verification fails: depth > max_depth,
    chain length != depth, or any chain entry signature fails.

    Reference: ACT §6 (Delegation Chain), §8.1 step 12.
    """


class ACTDAGError(ACTError):
    """DAG validation failed.

    Raised on cycle detection, missing parent jti, temporal ordering
    violations, or traversal limit exceeded.

    Reference: ACT §7 (DAG Structure and Causal Ordering).
    """


class ACTPhaseError(ACTError):
    """Wrong phase for the requested operation.

    Raised when a mandate is used where a record is expected, or
    vice versa. Phase is determined by the presence of exec_act.

    Reference: ACT §3 (Lifecycle), §8.
    """


class ACTKeyResolutionError(ACTError):
    """Cannot resolve kid to a public key.

    Raised when the kid in the JOSE header cannot be resolved to a
    public key via any configured trust tier (pre-shared, PKI, DID).

    Reference: ACT §5 (Trust Model), §8.1 step 4.
    """


class ACTLedgerImmutabilityError(ACTError):
    """Attempt to modify or delete a ledger record.

    The audit ledger enforces append-only semantics. Once appended,
    a record cannot be modified or deleted.

    Reference: ACT §10 (Audit Ledger Interface).
    """


class ACTPrivilegeEscalationError(ACTError):
    """Delegated capability exceeds parent capability.

    Raised when a delegated ACT contains actions not present in the
    parent ACT's cap array, or when constraints are less restrictive
    than the parent's constraints.

    Reference: ACT §6.2 (Privilege Reduction Requirements).
    """
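Because every exception above derives from ACTError, a verifier can funnel all ACT failure modes through a single except clause, as the base-class docstring notes. A minimal sketch with the hierarchy's shape re-declared locally for illustration (the real classes live in act/errors.py):

```python
# Local re-declaration of the hierarchy's shape, for illustration only.
class ACTError(Exception): ...
class ACTExpiredError(ACTError): ...
class ACTSignatureError(ACTError): ...


def verify(token: str) -> bool:
    """Toy verifier: maps two failure modes to specific exceptions."""
    if token == "expired":
        raise ACTExpiredError("exp exceeded")
    if token == "bad-sig":
        raise ACTSignatureError("signature mismatch")
    return True


for t in ("ok", "expired", "bad-sig"):
    try:
        print(t, verify(t))
    except ACTError as e:  # one clause catches every ACT failure mode
        print(t, type(e).__name__)
```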
152
workspace/packages/act/act/ledger.py
Normal file
@@ -0,0 +1,152 @@
"""ACT in-memory append-only audit ledger.

Provides an in-memory reference implementation of the audit ledger
interface. Enforces append-only semantics and hash-chain integrity.

Reference: ACT §10 (Audit Ledger Interface).
"""

from __future__ import annotations

import hashlib
import json
from typing import Any

from .errors import ACTLedgerImmutabilityError
from .token import ACTRecord


class ACTLedger:
    """In-memory append-only audit ledger for ACT execution records.

    Records are stored in insertion order with monotonically increasing
    sequence numbers. A hash chain provides integrity verification.

    Reference: ACT §10.

    This is a reference implementation suitable for testing. Production
    deployments should use a persistent backend implementing the same
    interface.
    """

    def __init__(self) -> None:
        self._records: list[tuple[int, ACTRecord, str]] = []
        # jti → index mapping for efficient lookup
        self._jti_index: dict[str, int] = {}
        # wid → list of indices for workflow queries
        self._wid_index: dict[str | None, list[int]] = {}
        self._seq_counter: int = 0
        # Hash chain: each entry's hash includes the previous hash
        self._chain_hashes: list[bytes] = []

    def append(self, act_record: ACTRecord) -> int:
        """Append an execution record to the ledger.

        Returns the sequence number assigned to the record.

        Reference: ACT §10, requirement 1 (append-only), requirement 2 (ordering).

        Raises:
            ACTLedgerImmutabilityError: If a record with the same jti
                already exists.
        """
        if act_record.jti in self._jti_index:
            raise ACTLedgerImmutabilityError(
                f"Record with jti {act_record.jti!r} already exists in ledger"
            )

        seq = self._seq_counter
        self._seq_counter += 1

        # Compute hash chain entry
        record_hash = self._hash_record(act_record, seq)
        if self._chain_hashes:
            chained = hashlib.sha256(
                self._chain_hashes[-1] + record_hash
            ).digest()
        else:
            chained = record_hash
        self._chain_hashes.append(chained)

        idx = len(self._records)
        self._records.append((seq, act_record, act_record.jti))
        self._jti_index[act_record.jti] = idx

        wid = act_record.wid
        if wid not in self._wid_index:
            self._wid_index[wid] = []
        self._wid_index[wid].append(idx)

        return seq

    def get(self, jti: str) -> ACTRecord | None:
        """Retrieve a record by jti.

        Reference: ACT §10, requirement 3 (lookup).
        """
        idx = self._jti_index.get(jti)
        if idx is None:
            return None
        return self._records[idx][1]

    def list(self, wid: str | None = None) -> list[ACTRecord]:
        """List records, optionally filtered by workflow id.

        If wid is None, returns all records. If wid is a string,
        returns only records with that wid value.

        Reference: ACT §10.
        """
        if wid is None:
            return [r[1] for r in self._records]

        indices = self._wid_index.get(wid, [])
        return [self._records[i][1] for i in indices]

    def verify_integrity(self) -> bool:
        """Verify the hash chain integrity of the ledger.

        Recomputes the hash chain from scratch and compares against
        stored chain hashes. Returns True if all hashes match.

        Reference: ACT §10, requirement 4 (integrity).
        """
        if not self._records:
            return True

        prev_hash: bytes | None = None
        for i, (seq, record, _jti) in enumerate(self._records):
            record_hash = self._hash_record(record, seq)
            if prev_hash is not None:
                expected = hashlib.sha256(prev_hash + record_hash).digest()
            else:
                expected = record_hash

            if i >= len(self._chain_hashes):
                return False
            if self._chain_hashes[i] != expected:
                return False
            prev_hash = expected

        return True

    def __len__(self) -> int:
        return len(self._records)

    def _hash_record(self, record: ACTRecord, seq: int) -> bytes:
        """Compute a deterministic hash of a record for chain integrity."""
        claims = record.to_claims()
        # Include sequence number in hash for ordering integrity
        claims["_seq"] = seq
        canonical = json.dumps(claims, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).digest()

    def _immutable_guard(self) -> None:
        """Internal method — not callable externally.

        The ledger has no update/delete methods by design.
        This exists to make the intent explicit.
        """
        raise ACTLedgerImmutabilityError(
            "Ledger records cannot be modified or deleted"
        )
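The hash-chain construction used by ACTLedger (each stored hash covers the previous chain hash plus a canonical record hash with the sequence number bound in) can be reproduced standalone. A minimal sketch under those assumptions; `build_chain` is illustrative and not part of the package:

```python
import hashlib
import json


def record_hash(claims: dict, seq: int) -> bytes:
    """Canonical record hash: sequence number bound in, keys sorted."""
    bound = dict(claims, _seq=seq)
    canonical = json.dumps(bound, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).digest()


def build_chain(records: list[dict]) -> list[bytes]:
    """Chain entry i = sha256(chain[i-1] + record_hash_i); entry 0 is the raw hash."""
    chain: list[bytes] = []
    for seq, claims in enumerate(records):
        h = record_hash(claims, seq)
        chain.append(hashlib.sha256(chain[-1] + h).digest() if chain else h)
    return chain


records = [{"jti": "a"}, {"jti": "b"}, {"jti": "c"}]
chain = build_chain(records)
tampered = [{"jti": "a"}, {"jti": "X"}, {"jti": "c"}]
# Altering one record invalidates its hash and every later chain entry.
print(build_chain(tampered)[1:] != chain[1:])  # True
```

This is the property verify_integrity checks: a recomputed chain that diverges anywhere after a tampered record fails the comparison.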
96
workspace/packages/act/act/lifecycle.py
Normal file
@@ -0,0 +1,96 @@
"""ACT Phase 1 to Phase 2 transition logic.

Handles the transition from Authorization Mandate to Execution Record,
including re-signing by the executing agent (sub).

Reference: ACT §3.2, §3.3 (Lifecycle State Machine).
"""

from __future__ import annotations

import time
from typing import Any

from .crypto import PrivateKey, sign as crypto_sign
from .errors import ACTCapabilityError, ACTPhaseError, ACTValidationError
from .token import (
    ACTMandate,
    ACTRecord,
    ErrorClaim,
    encode_jws,
)


def transition_to_record(
    mandate: ACTMandate,
    *,
    sub_kid: str,
    sub_private_key: PrivateKey,
    exec_act: str,
    pred: list[str] | None = None,
    exec_ts: int | None = None,
    status: str = "completed",
    inp_hash: str | None = None,
    out_hash: str | None = None,
    err: ErrorClaim | None = None,
) -> tuple[ACTRecord, str]:
    """Transition a Phase 1 mandate to a Phase 2 execution record.

    The executing agent (sub) adds execution claims and re-signs the
    complete token with its own private key. The kid in the Phase 2
    JOSE header MUST reference sub's key, not iss's key.

    All Phase 1 claims are preserved unchanged in the Phase 2 token.

    Reference: ACT §3.2, §8.2 step 17.

    Args:
        mandate: The Phase 1 ACTMandate to transition.
        sub_kid: The kid for the sub agent's signing key.
        sub_private_key: The sub agent's private key for re-signing.
        exec_act: The action actually performed (must match a cap[].action).
        pred: Predecessor task jti values (DAG dependencies). Empty list for root tasks.
        exec_ts: Execution timestamp (defaults to current time).
        status: Execution status: "completed", "failed", or "partial".
        inp_hash: Base64url SHA-256 hash of input data (optional).
        out_hash: Base64url SHA-256 hash of output data (optional).
        err: Error details when status is "failed" or "partial".

    Returns:
        Tuple of (ACTRecord, JWS compact serialization string).

    Raises:
        ACTPhaseError: If the mandate is already a Phase 2 token.
        ACTCapabilityError: If exec_act does not match any cap[].action.
        ACTValidationError: If the resulting record fails validation.
    """
    if mandate.is_phase2():
        raise ACTPhaseError("Cannot transition: token is already Phase 2")

    # Verify exec_act matches a capability
    cap_actions = {c.action for c in mandate.cap}
    if exec_act not in cap_actions:
        raise ACTCapabilityError(
            f"exec_act {exec_act!r} does not match any cap[].action: "
            f"{sorted(cap_actions)}"
        )

    record = ACTRecord.from_mandate(
        mandate,
        kid=sub_kid,
        exec_act=exec_act,
        pred=pred if pred is not None else [],
        exec_ts=exec_ts if exec_ts is not None else int(time.time()),
        status=status,
        inp_hash=inp_hash,
        out_hash=out_hash,
        err=err,
    )

    record.validate()

    # Re-sign with sub's private key
    signature = crypto_sign(sub_private_key, record.signing_input())
    compact = encode_jws(record, signature)

    return record, compact
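The claim layering in the Phase 1 to Phase 2 transition can be illustrated without the package: every mandate claim is preserved, execution claims are layered on top, and the header kid switches from the issuer's key to the subject's key. Field values below are illustrative, not taken from the spec's examples:

```python
# Phase 1: Authorization Mandate, signed under the issuer's key.
mandate_claims = {
    "iss": "agent:planner",
    "sub": "agent:worker",
    "jti": "m-1",
    "cap": [{"action": "search.web"}],
}
mandate_header = {"alg": "EdDSA", "typ": "act+jwt", "kid": "planner-key-1"}

# Phase 2: preserve every Phase 1 claim unchanged, add execution claims,
# and re-sign under the sub agent's key (the header kid changes).
record_claims = dict(
    mandate_claims, exec_act="search.web", pred=[], status="completed"
)
record_header = dict(mandate_header, kid="worker-key-1")

# Every Phase 1 claim survives unchanged in the Phase 2 token.
assert all(record_claims[k] == v for k, v in mandate_claims.items())
print(record_header["kid"])       # worker-key-1
print(record_claims["exec_act"])  # search.web
```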
734
workspace/packages/act/act/token.py
Normal file
@@ -0,0 +1,734 @@
|
|||||||
|
"""ACT token structures and JWS Compact Serialization.
|
||||||
|
|
||||||
|
Defines ACTMandate (Phase 1) and ACTRecord (Phase 2) dataclasses,
|
||||||
|
plus JWS encoding/decoding primitives for ACT tokens.
|
||||||
|
|
||||||
|
Reference: ACT §3 (Lifecycle), §4 (Token Format).
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import json
|
||||||
|
import re
|
||||||
|
import time
|
||||||
|
import uuid
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
from .errors import ACTPhaseError, ACTValidationError
|
||||||
|
|
||||||
|
# Allowed algorithms per ACT §4.1 — symmetric and "none" are forbidden.
|
||||||
|
ALLOWED_ALGORITHMS: frozenset[str] = frozenset({"EdDSA", "ES256"})
|
||||||
|
|
||||||
|
# Forbidden algorithm prefixes/values per ACT §4.1.
|
||||||
|
_FORBIDDEN_ALGORITHMS: frozenset[str] = frozenset({
|
||||||
|
"none", "HS256", "HS384", "HS512",
|
||||||
|
})
|
||||||
|
|
||||||
|
# Required typ value per ACT §4.1.
|
||||||
|
ACT_TYP: str = "act+jwt"
|
||||||
|
|
||||||
|
# ABNF for action names: component *("." component)
|
||||||
|
# component = ALPHA *(ALPHA / DIGIT / "-" / "_")
|
||||||
|
_ACTION_RE = re.compile(
|
||||||
|
r"^[A-Za-z][A-Za-z0-9\-_]*(?:\.[A-Za-z][A-Za-z0-9\-_]*)*$"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _b64url_encode(data: bytes) -> str:
|
||||||
|
"""Base64url encode without padding (RFC 7515 §2)."""
|
||||||
|
return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
|
||||||
|
|
||||||
|
|
||||||
|
def _b64url_decode(s: str) -> bytes:
|
||||||
|
"""Base64url decode with padding restoration."""
|
||||||
|
s = s + "=" * (-len(s) % 4)
|
||||||
|
return base64.urlsafe_b64decode(s)
|
||||||
|
|
||||||
|
|
||||||
|
def validate_action_name(action: str) -> None:
|
||||||
|
"""Validate an action name against ACT ABNF grammar.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2 (cap action names).
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
ACTValidationError: If action does not match the required grammar.
|
||||||
|
"""
|
||||||
|
if not _ACTION_RE.match(action):
|
||||||
|
raise ACTValidationError(
|
||||||
|
f"Action name {action!r} does not conform to ACT ABNF grammar"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class TaskClaim:
|
||||||
|
"""The 'task' claim object.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2.
|
||||||
|
"""
|
||||||
|
|
||||||
|
purpose: str
|
||||||
|
data_sensitivity: str | None = None
|
||||||
|
created_by: str | None = None
|
||||||
|
expires_at: int | None = None
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, Any]:
|
||||||
|
d: dict[str, Any] = {"purpose": self.purpose}
|
||||||
|
if self.data_sensitivity is not None:
|
||||||
|
d["data_sensitivity"] = self.data_sensitivity
|
||||||
|
if self.created_by is not None:
|
||||||
|
d["created_by"] = self.created_by
|
||||||
|
if self.expires_at is not None:
|
||||||
|
d["expires_at"] = self.expires_at
|
||||||
|
return d
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> TaskClaim:
|
||||||
|
if "purpose" not in d:
|
||||||
|
raise ACTValidationError("task.purpose is required")
|
||||||
|
return cls(
|
||||||
|
purpose=d["purpose"],
|
||||||
|
data_sensitivity=d.get("data_sensitivity"),
|
||||||
|
created_by=d.get("created_by"),
|
||||||
|
expires_at=d.get("expires_at"),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class Capability:
|
||||||
|
"""A single capability entry in the 'cap' array.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2.
|
||||||
|
"""
|
||||||
|
|
||||||
|
action: str
|
||||||
|
constraints: dict[str, Any] | None = None
|
||||||
|
|
||||||
|
def __post_init__(self) -> None:
|
||||||
|
validate_action_name(self.action)
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, Any]:
|
||||||
|
d: dict[str, Any] = {"action": self.action}
|
||||||
|
if self.constraints is not None:
|
||||||
|
d["constraints"] = self.constraints
|
||||||
|
return d
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> Capability:
|
||||||
|
if "action" not in d:
|
||||||
|
raise ACTValidationError("cap[].action is required")
|
||||||
|
return cls(
|
||||||
|
action=d["action"],
|
||||||
|
constraints=d.get("constraints"),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class DelegationEntry:
|
||||||
|
"""A single entry in del.chain.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2 (del), §6 (Delegation Chain).
|
||||||
|
"""
|
||||||
|
|
||||||
|
delegator: str
|
||||||
|
jti: str
|
||||||
|
sig: str
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, str]:
|
||||||
|
return {"delegator": self.delegator, "jti": self.jti, "sig": self.sig}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> DelegationEntry:
|
||||||
|
for key in ("delegator", "jti", "sig"):
|
||||||
|
if key not in d:
|
||||||
|
raise ACTValidationError(f"del.chain[].{key} is required")
|
||||||
|
return cls(
|
||||||
|
delegator=d["delegator"], jti=d["jti"], sig=d["sig"]
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class Delegation:
|
||||||
|
"""The 'del' claim object.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2 (del), §6 (Delegation Chain).
|
||||||
|
"""
|
||||||
|
|
||||||
|
depth: int
|
||||||
|
max_depth: int
|
||||||
|
chain: list[DelegationEntry] = field(default_factory=list)
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, Any]:
|
||||||
|
return {
|
||||||
|
"depth": self.depth,
|
||||||
|
"max_depth": self.max_depth,
|
||||||
|
"chain": [e.to_dict() for e in self.chain],
|
||||||
|
}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> Delegation:
|
||||||
|
for key in ("depth", "max_depth"):
|
||||||
|
if key not in d:
|
||||||
|
raise ACTValidationError(f"del.{key} is required")
|
||||||
|
chain_raw = d.get("chain", [])
|
||||||
|
chain = [DelegationEntry.from_dict(e) for e in chain_raw]
|
||||||
|
return cls(depth=d["depth"], max_depth=d["max_depth"], chain=chain)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class Oversight:
|
||||||
|
"""The 'oversight' claim object.
|
||||||
|
|
||||||
|
Reference: ACT §4.2.2 (oversight).
|
||||||
|
"""
|
||||||
|
|
||||||
|
requires_approval_for: list[str] = field(default_factory=list)
|
||||||
|
approval_ref: str | None = None
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, Any]:
|
||||||
|
d: dict[str, Any] = {
|
||||||
|
"requires_approval_for": self.requires_approval_for
|
||||||
|
}
|
||||||
|
if self.approval_ref is not None:
|
||||||
|
d["approval_ref"] = self.approval_ref
|
||||||
|
return d
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> Oversight:
|
||||||
|
return cls(
|
||||||
|
requires_approval_for=d.get("requires_approval_for", []),
|
||||||
|
approval_ref=d.get("approval_ref"),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass(frozen=True)
|
||||||
|
class ErrorClaim:
|
||||||
|
"""The 'err' claim object for failed/partial execution.
|
||||||
|
|
||||||
|
Reference: ACT §4.3.
|
||||||
|
"""
|
||||||
|
|
||||||
|
code: str
|
||||||
|
detail: str
|
||||||
|
|
||||||
|
def to_dict(self) -> dict[str, str]:
|
||||||
|
return {"code": self.code, "detail": self.detail}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, d: dict[str, Any]) -> ErrorClaim:
|
||||||
|
for key in ("code", "detail"):
|
||||||
|
if key not in d:
|
||||||
|
raise ACTValidationError(f"err.{key} is required")
|
||||||
|
return cls(code=d["code"], detail=d["detail"])
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class ACTMandate:
|
||||||
|
"""Phase 1 Authorization Mandate.
|
||||||
|
|
||||||
|
Represents a signed authorization from an issuing agent to a target
|
||||||
|
    agent, encoding capabilities, constraints, and delegation provenance.

    Reference: ACT §3.1, §4.1, §4.2.
    """

    # JOSE header fields
    alg: str
    kid: str
    x5c: list[str] | None = None
    did: str | None = None

    # Required JWT claims
    iss: str = ""
    sub: str = ""
    aud: str | list[str] = ""
    iat: int = 0
    exp: int = 0
    jti: str = field(default_factory=lambda: str(uuid.uuid4()))

    # Optional standard claims
    wid: str | None = None

    # Required ACT claims
    task: TaskClaim = field(default_factory=lambda: TaskClaim(purpose=""))
    cap: list[Capability] = field(default_factory=list)

    # Optional ACT claims
    delegation: Delegation | None = None
    oversight: Oversight | None = None

    def validate(self) -> None:
        """Validate structural correctness of this mandate.

        Reference: ACT §4.1, §4.2, §8.1 step 11.

        Raises:
            ACTValidationError: If any required field is missing or invalid.
        """
        _validate_algorithm(self.alg)
        if not self.kid:
            raise ACTValidationError("kid is required in JOSE header")
        for claim_name in ("iss", "sub", "aud", "jti"):
            val = getattr(self, claim_name)
            if not val:
                raise ACTValidationError(f"{claim_name} claim is required")
        if self.iat <= 0:
            raise ACTValidationError("iat must be a positive NumericDate")
        if self.exp <= 0:
            raise ACTValidationError("exp must be a positive NumericDate")
        if not self.task.purpose:
            raise ACTValidationError("task.purpose is required")
        if not self.cap:
            raise ACTValidationError("cap must contain at least one capability")
    def to_header(self) -> dict[str, Any]:
        """Build JOSE header dict.

        Reference: ACT §4.1.
        """
        h: dict[str, Any] = {
            "alg": self.alg,
            "typ": ACT_TYP,
            "kid": self.kid,
        }
        if self.x5c is not None:
            h["x5c"] = self.x5c
        if self.did is not None:
            h["did"] = self.did
        return h

    def to_claims(self) -> dict[str, Any]:
        """Build JWT claims dict (Phase 1 claims only).

        Reference: ACT §4.2.
        """
        c: dict[str, Any] = {
            "iss": self.iss,
            "sub": self.sub,
            "aud": self.aud,
            "iat": self.iat,
            "exp": self.exp,
            "jti": self.jti,
            "task": self.task.to_dict(),
            "cap": [cap.to_dict() for cap in self.cap],
        }
        if self.wid is not None:
            c["wid"] = self.wid
        if self.delegation is not None:
            c["del"] = self.delegation.to_dict()
        if self.oversight is not None:
            c["oversight"] = self.oversight.to_dict()
        return c

    def signing_input(self) -> bytes:
        """Compute the JWS signing input (header.payload) as bytes.

        Reference: RFC 7515 §5.1.
        """
        header_b64 = _b64url_encode(
            json.dumps(self.to_header(), separators=(",", ":")).encode()
        )
        payload_b64 = _b64url_encode(
            json.dumps(self.to_claims(), separators=(",", ":")).encode()
        )
        return f"{header_b64}.{payload_b64}".encode("ascii")

    def is_phase2(self) -> bool:
        """Return False; mandates are always Phase 1."""
        return False
    @classmethod
    def from_claims(
        cls,
        header: dict[str, Any],
        claims: dict[str, Any],
    ) -> ACTMandate:
        """Construct an ACTMandate from parsed header and claims dicts.

        Reference: ACT §4.1, §4.2.

        Raises:
            ACTValidationError: If required fields are missing.
            ACTPhaseError: If exec_act is present (this is a Phase 2 token).
        """
        if "exec_act" in claims:
            raise ACTPhaseError(
                "Token contains exec_act; use ACTRecord.from_claims instead"
            )

        del_raw = claims.get("del")
        delegation = Delegation.from_dict(del_raw) if del_raw else None

        oversight_raw = claims.get("oversight")
        oversight_obj = Oversight.from_dict(oversight_raw) if oversight_raw else None

        task_raw = claims.get("task")
        if task_raw is None:
            raise ACTValidationError("task claim is required")

        cap_raw = claims.get("cap")
        if cap_raw is None:
            raise ACTValidationError("cap claim is required")

        return cls(
            alg=header.get("alg", ""),
            kid=header.get("kid", ""),
            x5c=header.get("x5c"),
            did=header.get("did"),
            iss=claims.get("iss", ""),
            sub=claims.get("sub", ""),
            aud=claims.get("aud", ""),
            iat=claims.get("iat", 0),
            exp=claims.get("exp", 0),
            jti=claims.get("jti", ""),
            wid=claims.get("wid"),
            task=TaskClaim.from_dict(task_raw),
            cap=[Capability.from_dict(c) for c in cap_raw],
            delegation=delegation,
            oversight=oversight_obj,
        )
@dataclass
class ACTRecord:
    """Phase 2 Execution Record.

    Contains all Phase 1 claims preserved unchanged, plus execution
    claims added by the executing agent. Re-signed by sub's key.

    Reference: ACT §3.2, §4.3.
    """

    # JOSE header fields (Phase 2 header uses sub's kid)
    alg: str
    kid: str
    x5c: list[str] | None = None
    did: str | None = None

    # Phase 1 claims (preserved)
    iss: str = ""
    sub: str = ""
    aud: str | list[str] = ""
    iat: int = 0
    exp: int = 0
    jti: str = ""
    wid: str | None = None
    task: TaskClaim = field(default_factory=lambda: TaskClaim(purpose=""))
    cap: list[Capability] = field(default_factory=list)
    delegation: Delegation | None = None
    oversight: Oversight | None = None

    # Phase 2 claims (execution)
    exec_act: str = ""
    pred: list[str] = field(default_factory=list)
    exec_ts: int = 0
    status: str = ""
    inp_hash: str | None = None
    out_hash: str | None = None
    err: ErrorClaim | None = None

    def validate(self) -> None:
        """Validate structural correctness of this record.

        Reference: ACT §4.3, §8.2 steps 13-16.

        Raises:
            ACTValidationError: If any required field is missing or invalid.
        """
        _validate_algorithm(self.alg)
        if not self.kid:
            raise ACTValidationError("kid is required in JOSE header")
        for claim_name in ("iss", "sub", "aud", "jti"):
            val = getattr(self, claim_name)
            if not val:
                raise ACTValidationError(f"{claim_name} claim is required")
        if self.iat <= 0:
            raise ACTValidationError("iat must be a positive NumericDate")
        if self.exp <= 0:
            raise ACTValidationError("exp must be a positive NumericDate")
        if not self.task.purpose:
            raise ACTValidationError("task.purpose is required")
        if not self.cap:
            raise ACTValidationError("cap must contain at least one capability")
        if not self.exec_act:
            raise ACTValidationError("exec_act is required in Phase 2")
        validate_action_name(self.exec_act)
        if self.exec_ts <= 0:
            raise ACTValidationError("exec_ts must be a positive NumericDate")
        if self.status not in ("completed", "failed", "partial"):
            raise ACTValidationError(
                f"status must be one of completed/failed/partial, got {self.status!r}"
            )
    def to_header(self) -> dict[str, Any]:
        """Build JOSE header dict for Phase 2.

        In Phase 2, kid MUST reference the sub agent's key.
        Reference: ACT §4.1, §8.2 step 17.
        """
        h: dict[str, Any] = {
            "alg": self.alg,
            "typ": ACT_TYP,
            "kid": self.kid,
        }
        if self.x5c is not None:
            h["x5c"] = self.x5c
        if self.did is not None:
            h["did"] = self.did
        return h

    def to_claims(self) -> dict[str, Any]:
        """Build JWT claims dict (Phase 1 + Phase 2 claims).

        Reference: ACT §4.2, §4.3.
        """
        c: dict[str, Any] = {
            "iss": self.iss,
            "sub": self.sub,
            "aud": self.aud,
            "iat": self.iat,
            "exp": self.exp,
            "jti": self.jti,
            "task": self.task.to_dict(),
            "cap": [cap.to_dict() for cap in self.cap],
            "exec_act": self.exec_act,
            "pred": self.pred,
            "exec_ts": self.exec_ts,
            "status": self.status,
        }
        if self.wid is not None:
            c["wid"] = self.wid
        if self.delegation is not None:
            c["del"] = self.delegation.to_dict()
        if self.oversight is not None:
            c["oversight"] = self.oversight.to_dict()
        if self.inp_hash is not None:
            c["inp_hash"] = self.inp_hash
        if self.out_hash is not None:
            c["out_hash"] = self.out_hash
        if self.err is not None:
            c["err"] = self.err.to_dict()
        return c

    def signing_input(self) -> bytes:
        """Compute the JWS signing input (header.payload) as bytes.

        Reference: RFC 7515 §5.1.
        """
        header_b64 = _b64url_encode(
            json.dumps(self.to_header(), separators=(",", ":")).encode()
        )
        payload_b64 = _b64url_encode(
            json.dumps(self.to_claims(), separators=(",", ":")).encode()
        )
        return f"{header_b64}.{payload_b64}".encode("ascii")

    def is_phase2(self) -> bool:
        """Return True; records are always Phase 2."""
        return True
    @classmethod
    def from_mandate(
        cls,
        mandate: ACTMandate,
        *,
        kid: str,
        exec_act: str,
        pred: list[str] | None = None,
        exec_ts: int | None = None,
        status: str = "completed",
        inp_hash: str | None = None,
        out_hash: str | None = None,
        err: ErrorClaim | None = None,
    ) -> ACTRecord:
        """Create an ACTRecord by transitioning a mandate to Phase 2.

        The kid MUST be the sub agent's key identifier.

        Reference: ACT §3.2, §4.3.
        """
        return cls(
            alg=mandate.alg,
            kid=kid,
            x5c=mandate.x5c,
            did=mandate.did,
            iss=mandate.iss,
            sub=mandate.sub,
            aud=mandate.aud,
            iat=mandate.iat,
            exp=mandate.exp,
            jti=mandate.jti,
            wid=mandate.wid,
            task=mandate.task,
            cap=mandate.cap,
            delegation=mandate.delegation,
            oversight=mandate.oversight,
            exec_act=exec_act,
            pred=pred if pred is not None else [],
            exec_ts=exec_ts if exec_ts is not None else int(time.time()),
            status=status,
            inp_hash=inp_hash,
            out_hash=out_hash,
            err=err,
        )
    @classmethod
    def from_claims(
        cls,
        header: dict[str, Any],
        claims: dict[str, Any],
    ) -> ACTRecord:
        """Construct an ACTRecord from parsed header and claims dicts.

        Reference: ACT §4.1, §4.2, §4.3.

        Raises:
            ACTValidationError: If required fields are missing.
            ACTPhaseError: If exec_act is absent (this is a Phase 1 token).
        """
        if "exec_act" not in claims:
            raise ACTPhaseError(
                "Token does not contain exec_act; use ACTMandate.from_claims instead"
            )

        del_raw = claims.get("del")
        delegation = Delegation.from_dict(del_raw) if del_raw else None

        oversight_raw = claims.get("oversight")
        oversight_obj = Oversight.from_dict(oversight_raw) if oversight_raw else None

        task_raw = claims.get("task")
        if task_raw is None:
            raise ACTValidationError("task claim is required")

        cap_raw = claims.get("cap")
        if cap_raw is None:
            raise ACTValidationError("cap claim is required")

        err_raw = claims.get("err")
        err_obj = ErrorClaim.from_dict(err_raw) if err_raw else None

        return cls(
            alg=header.get("alg", ""),
            kid=header.get("kid", ""),
            x5c=header.get("x5c"),
            did=header.get("did"),
            iss=claims.get("iss", ""),
            sub=claims.get("sub", ""),
            aud=claims.get("aud", ""),
            iat=claims.get("iat", 0),
            exp=claims.get("exp", 0),
            jti=claims.get("jti", ""),
            wid=claims.get("wid"),
            task=TaskClaim.from_dict(task_raw),
            cap=[Capability.from_dict(c) for c in cap_raw],
            delegation=delegation,
            oversight=oversight_obj,
            exec_act=claims["exec_act"],
            pred=claims.get("pred", []),
            exec_ts=claims.get("exec_ts", 0),
            status=claims.get("status", ""),
            inp_hash=claims.get("inp_hash"),
            out_hash=claims.get("out_hash"),
            err=err_obj,
        )
# --- JWS Compact Serialization ---


def encode_jws(
    token: ACTMandate | ACTRecord,
    signature: bytes,
) -> str:
    """Encode a token and signature as JWS Compact Serialization.

    Returns header.payload.signature (three base64url segments).

    Reference: RFC 7515 §3.1, ACT §4.
    """
    signing_input = token.signing_input().decode("ascii")
    sig_b64 = _b64url_encode(signature)
    return f"{signing_input}.{sig_b64}"
def decode_jws(compact: str) -> tuple[dict[str, Any], dict[str, Any], bytes, bytes]:
    """Decode a JWS Compact Serialization string.

    Returns (header_dict, claims_dict, signature_bytes, signing_input_bytes).

    Reference: RFC 7515 §5.2, ACT §4.

    Raises:
        ACTValidationError: If the token is malformed.
    """
    parts = compact.split(".")
    if len(parts) != 3:
        raise ACTValidationError(
            f"JWS Compact Serialization requires 3 parts, got {len(parts)}"
        )

    try:
        header = json.loads(_b64url_decode(parts[0]))
    except ValueError as e:
        # ValueError covers both base64url errors (binascii.Error) and
        # json.JSONDecodeError.
        raise ACTValidationError(f"Invalid JOSE header: {e}") from e

    try:
        claims = json.loads(_b64url_decode(parts[1]))
    except ValueError as e:
        raise ACTValidationError(f"Invalid JWT claims: {e}") from e

    try:
        signature = _b64url_decode(parts[2])
    except ValueError as e:
        raise ACTValidationError(f"Invalid signature encoding: {e}") from e

    signing_input = f"{parts[0]}.{parts[1]}".encode("ascii")

    # Validate header requirements per ACT §4.1
    typ = header.get("typ")
    if typ != ACT_TYP:
        raise ACTValidationError(f"typ must be {ACT_TYP!r}, got {typ!r}")

    alg = header.get("alg", "")
    _validate_algorithm(alg)

    if "kid" not in header:
        raise ACTValidationError("kid is required in JOSE header")

    return header, claims, signature, signing_input
def parse_token(compact: str) -> ACTMandate | ACTRecord:
    """Parse a JWS compact string into an ACTMandate or ACTRecord.

    Determines phase by presence of the exec_act claim.

    Reference: ACT §3 (phase determination).

    Returns:
        ACTMandate for Phase 1, ACTRecord for Phase 2.
    """
    header, claims, _, _ = decode_jws(compact)
    if "exec_act" in claims:
        return ACTRecord.from_claims(header, claims)
    return ACTMandate.from_claims(header, claims)
def _validate_algorithm(alg: str) -> None:
    """Check that the algorithm is allowed per ACT §4.1.

    Raises:
        ACTValidationError: If the algorithm is forbidden or unsupported.
    """
    if alg in _FORBIDDEN_ALGORITHMS or alg.upper() in _FORBIDDEN_ALGORITHMS:
        raise ACTValidationError(
            f"Algorithm {alg!r} is forbidden by the ACT specification"
        )
    if alg not in ALLOWED_ALGORITHMS:
        raise ACTValidationError(
            f"Unsupported algorithm {alg!r}; allowed: {sorted(ALLOWED_ALGORITHMS)}"
        )
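The `signing_input()`/`decode_jws()` pair above is plain RFC 7515 compact serialization: base64url-encode the minified header and payload JSON, join with `.`, and sign that string. A minimal standalone sketch of the same round trip, using only the standard library (the header values here are illustrative placeholders, not normative ACT values):

```python
import base64
import json


def b64url(data: bytes) -> str:
    # base64url without padding, per RFC 7515 §2
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def b64url_decode(s: str) -> bytes:
    # Restore padding before decoding
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


# Illustrative header/claims (the real 'typ' value comes from ACT_TYP)
header = {"alg": "EdDSA", "typ": "act+jwt", "kid": "iss-key"}
claims = {"iss": "agent-issuer", "jti": "example"}

h = b64url(json.dumps(header, separators=(",", ":")).encode())
p = b64url(json.dumps(claims, separators=(",", ":")).encode())
signing_input = f"{h}.{p}".encode("ascii")

# Splitting the compact form recovers the same signing input and claims
compact = f"{h}.{p}.{b64url(b'signature-bytes')}"
parts = compact.split(".")
assert f"{parts[0]}.{parts[1]}".encode("ascii") == signing_input
assert json.loads(b64url_decode(parts[1])) == claims
```

Note that the signature in the third segment covers the first two segments exactly as transmitted, which is why `decode_jws` returns the raw `signing_input` bytes alongside the parsed dicts.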
639
workspace/packages/act/act/vectors.py
Normal file
@@ -0,0 +1,639 @@
"""ACT Appendix B test vectors.
|
||||||
|
|
||||||
|
Generates and validates all 15 test vectors from Appendix B of
|
||||||
|
draft-nennemann-act-01. Each vector includes description, input
|
||||||
|
parameters, and expected output or exception.
|
||||||
|
|
||||||
|
Reference: ACT Appendix B (Test Vectors).
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import time
|
||||||
|
import uuid
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
from .crypto import (
|
||||||
|
ACTKeyResolver,
|
||||||
|
KeyRegistry,
|
||||||
|
PrivateKey,
|
||||||
|
PublicKey,
|
||||||
|
b64url_sha256,
|
||||||
|
compute_sha256,
|
||||||
|
generate_ed25519_keypair,
|
||||||
|
sign as crypto_sign,
|
||||||
|
verify as crypto_verify,
|
||||||
|
)
|
||||||
|
from .dag import validate_dag
|
||||||
|
from .delegation import create_delegated_mandate, verify_capability_subset
|
||||||
|
from .errors import (
|
||||||
|
ACTAudienceMismatchError,
|
||||||
|
ACTCapabilityError,
|
||||||
|
ACTDAGError,
|
||||||
|
ACTDelegationError,
|
||||||
|
ACTExpiredError,
|
||||||
|
ACTPrivilegeEscalationError,
|
||||||
|
ACTSignatureError,
|
||||||
|
ACTValidationError,
|
||||||
|
)
|
||||||
|
from .ledger import ACTLedger
|
||||||
|
from .lifecycle import transition_to_record
|
||||||
|
from .token import (
|
||||||
|
ACTMandate,
|
||||||
|
ACTRecord,
|
||||||
|
Capability,
|
||||||
|
Delegation,
|
||||||
|
DelegationEntry,
|
||||||
|
ErrorClaim,
|
||||||
|
Oversight,
|
||||||
|
TaskClaim,
|
||||||
|
_b64url_encode,
|
||||||
|
decode_jws,
|
||||||
|
encode_jws,
|
||||||
|
)
|
||||||
|
from .verify import ACTVerifier
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class TestVector:
|
||||||
|
"""A single test vector."""
|
||||||
|
|
||||||
|
id: str
|
||||||
|
description: str
|
||||||
|
valid: bool
|
||||||
|
expected_exception: type[Exception] | None = None
|
||||||
|
compact: str = ""
|
||||||
|
record: ACTMandate | ACTRecord | None = None
|
||||||
|
|
||||||
|
|
||||||
|
def generate_vectors() -> tuple[list[TestVector], dict[str, Any]]:
    """Generate all Appendix B test vectors.

    Returns a list of TestVector objects and a context dict containing
    keys and other state needed for validation.

    Reference: ACT Appendix B.
    """
    # Fixed timestamp for deterministic vectors
    base_time = 1772064000

    # Generate key pairs for test agents
    iss_priv, iss_pub = generate_ed25519_keypair()
    sub_priv, sub_pub = generate_ed25519_keypair()
    agent_c_priv, agent_c_pub = generate_ed25519_keypair()

    # Fixed JTIs for cross-referencing
    jti_b1 = "550e8400-e29b-41d4-a716-446655440001"
    jti_b2 = "550e8400-e29b-41d4-a716-446655440002"
    jti_b3_parent1 = "550e8400-e29b-41d4-a716-446655440003"
    jti_b3_parent2 = "550e8400-e29b-41d4-a716-446655440004"
    jti_b3 = "550e8400-e29b-41d4-a716-446655440005"
    jti_b4 = "550e8400-e29b-41d4-a716-446655440006"
    jti_b5 = "550e8400-e29b-41d4-a716-446655440007"
    wid = "a0b1c2d3-e4f5-6789-abcd-ef0123456789"

    # Key registry
    registry = KeyRegistry()
    registry.register("iss-key", iss_pub)
    registry.register("sub-key", sub_pub)
    registry.register("agent-c-key", agent_c_pub)

    resolver = ACTKeyResolver(registry=registry)

    vectors: list[TestVector] = []
    compacts: dict[str, str] = {}  # jti → compact for delegation refs
    # --- B.1: Phase 1 — Root mandate, Tier 1, Ed25519, no delegation ---
    mandate_b1 = ACTMandate(
        alg="EdDSA",
        kid="iss-key",
        iss="agent-issuer",
        sub="agent-subject",
        aud=["agent-subject", "https://ledger.example.com"],
        iat=base_time,
        exp=base_time + 900,
        jti=jti_b1,
        wid=wid,
        task=TaskClaim(
            purpose="validate_data",
            data_sensitivity="restricted",
        ),
        cap=[
            Capability(action="read.data", constraints={"max_records": 10}),
            Capability(action="write.result"),
        ],
        delegation=Delegation(depth=0, max_depth=2, chain=[]),
    )
    mandate_b1.validate()
    sig_b1 = crypto_sign(iss_priv, mandate_b1.signing_input())
    compact_b1 = encode_jws(mandate_b1, sig_b1)
    compacts[jti_b1] = compact_b1

    vectors.append(TestVector(
        id="B.1",
        description="Phase 1 ACT — root mandate, Tier 1 (Ed25519), no delegation",
        valid=True,
        compact=compact_b1,
        record=mandate_b1,
    ))

    # --- B.2: Phase 2 — Completed execution from B.1 ---
    record_b2, compact_b2 = transition_to_record(
        mandate_b1,
        sub_kid="sub-key",
        sub_private_key=sub_priv,
        exec_act="read.data",
        pred=[],
        exec_ts=base_time + 300,
        status="completed",
        inp_hash=b64url_sha256(b"test input data"),
        out_hash=b64url_sha256(b"test output data"),
    )
    compacts[jti_b2] = compact_b2

    vectors.append(TestVector(
        id="B.2",
        description="Phase 2 ACT — completed execution, transition from B.1 mandate",
        valid=True,
        compact=compact_b2,
        record=record_b2,
    ))
    # --- B.3: Phase 2 — Fan-in, two parent jti values ---
    # Create two parent records first
    parent1_mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=jti_b3_parent1, wid=wid,
        task=TaskClaim(purpose="branch_a"),
        cap=[Capability(action="compute.result")],
        delegation=Delegation(depth=0, max_depth=1, chain=[]),
    )
    sig_p1 = crypto_sign(iss_priv, parent1_mandate.signing_input())
    compact_p1 = encode_jws(parent1_mandate, sig_p1)

    parent1_record, parent1_compact = transition_to_record(
        parent1_mandate, sub_kid="sub-key", sub_private_key=sub_priv,
        exec_act="compute.result", pred=[], exec_ts=base_time + 100,
        status="completed",
    )

    parent2_mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=jti_b3_parent2, wid=wid,
        task=TaskClaim(purpose="branch_b"),
        cap=[Capability(action="compute.result")],
        delegation=Delegation(depth=0, max_depth=1, chain=[]),
    )
    sig_p2 = crypto_sign(iss_priv, parent2_mandate.signing_input())
    compact_p2 = encode_jws(parent2_mandate, sig_p2)

    parent2_record, parent2_compact = transition_to_record(
        parent2_mandate, sub_kid="sub-key", sub_private_key=sub_priv,
        exec_act="compute.result", pred=[], exec_ts=base_time + 150,
        status="completed",
    )

    # Fan-in record depends on both parents
    fanin_mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=jti_b3, wid=wid,
        task=TaskClaim(purpose="merge_results"),
        cap=[Capability(action="compute.result")],
        delegation=Delegation(depth=0, max_depth=1, chain=[]),
    )
    sig_fi = crypto_sign(iss_priv, fanin_mandate.signing_input())

    fanin_record, fanin_compact = transition_to_record(
        fanin_mandate, sub_kid="sub-key", sub_private_key=sub_priv,
        exec_act="compute.result",
        pred=[jti_b3_parent1, jti_b3_parent2],
        exec_ts=base_time + 200,
        status="completed",
    )

    vectors.append(TestVector(
        id="B.3",
        description="Phase 2 ACT — fan-in, two predecessor jti values from parallel branches",
        valid=True,
        compact=fanin_compact,
        record=fanin_record,
    ))
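The B.3 fan-in vector exercises the DAG semantics of `pred`: a record may reference several predecessor `jti` values, and the graph must stay acyclic with no dangling parents. As a rough standalone illustration of the kind of check a `validate_dag()` helper performs (a hypothetical sketch over jti → predecessor edges, not this package's implementation):

```python
def check_dag(records: dict[str, list[str]]) -> None:
    """Reject self-references, missing parents, and cycles.

    records maps each jti to the list of predecessor jtis it names.
    """
    for jti, preds in records.items():
        for p in preds:
            if p == jti:
                raise ValueError(f"cycle: {jti} references itself")
            if p not in records:
                raise ValueError(f"missing parent: {p}")

    # DFS-based cycle detection: 1 = on current path, 2 = fully explored
    state: dict[str, int] = {}

    def visit(n: str) -> None:
        if state.get(n) == 1:
            raise ValueError(f"cycle through {n}")
        if state.get(n) == 2:
            return
        state[n] = 1
        for p in records[n]:
            visit(p)
        state[n] = 2

    for n in records:
        visit(n)


# A valid fan-in like B.3: "merge" depends on two parallel branches
fanin = {"p1": [], "p2": [], "merge": ["p1", "p2"]}
check_dag(fanin)

# A self-referencing pred like vector B.9 is rejected
try:
    check_dag({"a": ["a"]})
except ValueError as e:
    print("rejected:", e)
```

The real implementation additionally verifies signatures and timestamps on each node; this sketch covers only the graph-shape rules that vectors B.3, B.9, and B.10 exercise.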
    # --- B.4: Phase 1 — Delegated mandate (depth=1) ---
    delegated_b4, _ = create_delegated_mandate(
        parent_mandate=mandate_b1,
        parent_compact=compact_b1,
        delegator_private_key=iss_priv,
        sub="agent-c",
        kid="iss-key",
        iss="agent-issuer",
        aud="agent-c",
        iat=base_time + 10,
        exp=base_time + 600,
        jti=jti_b4,
        cap=[Capability(action="read.data", constraints={"max_records": 5})],
        task=TaskClaim(purpose="delegated_read"),
    )
    sig_b4 = crypto_sign(iss_priv, delegated_b4.signing_input())
    compact_b4 = encode_jws(delegated_b4, sig_b4)
    compacts[jti_b4] = compact_b4

    vectors.append(TestVector(
        id="B.4",
        description="Phase 1 ACT — delegated mandate (depth=1), chain entry with sig",
        valid=True,
        compact=compact_b4,
        record=delegated_b4,
    ))

    # --- B.5: Phase 2 — Delegated execution record ---
    record_b5, compact_b5 = transition_to_record(
        delegated_b4,
        sub_kid="agent-c-key",
        sub_private_key=agent_c_priv,
        exec_act="read.data",
        pred=[],
        exec_ts=base_time + 350,
        status="completed",
    )

    vectors.append(TestVector(
        id="B.5",
        description="Phase 2 ACT — delegated execution record",
        valid=True,
        compact=compact_b5,
        record=record_b5,
    ))
    # --- B.6: del.depth > del.max_depth → ACTDelegationError ---
    bad_depth_mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="bad_depth"),
        cap=[Capability(action="read.data")],
        delegation=Delegation(depth=3, max_depth=2, chain=[
            DelegationEntry(delegator="a", jti="j1", sig="sig1"),
            DelegationEntry(delegator="b", jti="j2", sig="sig2"),
            DelegationEntry(delegator="c", jti="j3", sig="sig3"),
        ]),
    )
    sig_b6 = crypto_sign(iss_priv, bad_depth_mandate.signing_input())
    compact_b6 = encode_jws(bad_depth_mandate, sig_b6)

    vectors.append(TestVector(
        id="B.6",
        description="del.depth > del.max_depth → ACTDelegationError",
        valid=False,
        expected_exception=ACTDelegationError,
        compact=compact_b6,
    ))

    # --- B.7: cap escalation in delegated ACT → ACTPrivilegeEscalationError ---
    vectors.append(TestVector(
        id="B.7",
        description="cap escalation in delegated ACT → ACTPrivilegeEscalationError",
        valid=False,
        expected_exception=ACTPrivilegeEscalationError,
    ))
    # --- B.8: exec_act not in cap → ACTCapabilityError ---
    bad_exec_mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="bad_exec"),
        cap=[Capability(action="read.data")],
        delegation=Delegation(depth=0, max_depth=1, chain=[]),
    )
    sig_b8m = crypto_sign(iss_priv, bad_exec_mandate.signing_input())

    # Manually construct Phase 2 with wrong exec_act
    bad_exec_record = ACTRecord(
        alg="EdDSA", kid="sub-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=bad_exec_mandate.jti,
        task=TaskClaim(purpose="bad_exec"),
        cap=[Capability(action="read.data")],
        exec_act="delete.everything",
        pred=[], exec_ts=base_time + 100, status="completed",
    )
    sig_b8 = crypto_sign(sub_priv, bad_exec_record.signing_input())
    compact_b8 = encode_jws(bad_exec_record, sig_b8)

    vectors.append(TestVector(
        id="B.8",
        description="exec_act not in cap → ACTCapabilityError",
        valid=False,
        expected_exception=ACTCapabilityError,
        compact=compact_b8,
    ))
    # --- B.9: DAG cycle (pred references own jti) → ACTDAGError ---
    cycle_jti = str(uuid.uuid4())
    cycle_record = ACTRecord(
        alg="EdDSA", kid="sub-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=cycle_jti,
        task=TaskClaim(purpose="cycle_test"),
        cap=[Capability(action="read.data")],
        exec_act="read.data",
        pred=[cycle_jti],
        exec_ts=base_time + 100, status="completed",
    )
    sig_b9 = crypto_sign(sub_priv, cycle_record.signing_input())
    compact_b9 = encode_jws(cycle_record, sig_b9)

    vectors.append(TestVector(
        id="B.9",
        description="DAG cycle (pred references own jti) → ACTDAGError",
        valid=False,
        expected_exception=ACTDAGError,
        compact=compact_b9,
    ))
    # --- B.10: Missing parent jti in DAG → ACTDAGError ---
    missing_parent_record = ACTRecord(
        alg="EdDSA", kid="sub-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=base_time, exp=base_time + 900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="missing_parent"),
        cap=[Capability(action="read.data")],
        exec_act="read.data",
        pred=["nonexistent-parent-jti"],
        exec_ts=base_time + 100, status="completed",
    )
    sig_b10 = crypto_sign(sub_priv, missing_parent_record.signing_input())
    compact_b10 = encode_jws(missing_parent_record, sig_b10)

    vectors.append(TestVector(
        id="B.10",
        description="Missing parent jti in DAG → ACTDAGError",
        valid=False,
        expected_exception=ACTDAGError,
        compact=compact_b10,
    ))
    # --- B.11: Tampered payload → ACTSignatureError ---
    # Take a valid compact and substitute one payload character with a
    # different base64url-alphabet character, so the payload still decodes
    # but no longer matches the signature. (A raw +1 byte flip could land
    # outside the base64url alphabet and fail decoding instead.)
    parts = compact_b1.split(".")
    payload_bytes = bytearray(parts[1].encode("ascii"))
    flip_idx = len(payload_bytes) // 2
    payload_bytes[flip_idx] = ord("A") if payload_bytes[flip_idx] != ord("A") else ord("B")
    tampered_compact = f"{parts[0]}.{payload_bytes.decode('ascii')}.{parts[2]}"

    vectors.append(TestVector(
        id="B.11",
        description="Tampered payload (modified claims) → ACTSignatureError",
        valid=False,
        expected_exception=ACTSignatureError,
        compact=tampered_compact,
    ))
# --- B.12: Expired token → ACTExpiredError ---
|
||||||
|
expired_mandate = ACTMandate(
|
||||||
|
alg="EdDSA", kid="iss-key",
|
||||||
|
iss="agent-issuer", sub="agent-subject",
|
||||||
|
aud="agent-subject",
|
||||||
|
iat=base_time - 3600,
|
||||||
|
exp=base_time - 2700, # expired 45 minutes ago
|
||||||
|
jti=str(uuid.uuid4()),
|
||||||
|
task=TaskClaim(purpose="expired_test"),
|
||||||
|
cap=[Capability(action="read.data")],
|
||||||
|
)
|
||||||
|
sig_b12 = crypto_sign(iss_priv, expired_mandate.signing_input())
|
||||||
|
compact_b12 = encode_jws(expired_mandate, sig_b12)
|
||||||
|
|
||||||
|
vectors.append(TestVector(
|
||||||
|
id="B.12",
|
||||||
|
description="Expired token → ACTExpiredError",
|
||||||
|
valid=False,
|
||||||
|
expected_exception=ACTExpiredError,
|
||||||
|
compact=compact_b12,
|
||||||
|
))
|
||||||
|
|
||||||
|
# --- B.13: Wrong audience → ACTAudienceMismatchError ---
|
||||||
|
wrong_aud_mandate = ACTMandate(
|
||||||
|
alg="EdDSA", kid="iss-key",
|
||||||
|
iss="agent-issuer", sub="wrong-agent",
|
||||||
|
aud="wrong-agent",
|
||||||
|
iat=base_time, exp=base_time + 900,
|
||||||
|
jti=str(uuid.uuid4()),
|
||||||
|
task=TaskClaim(purpose="wrong_aud_test"),
|
||||||
|
cap=[Capability(action="read.data")],
|
||||||
|
)
|
||||||
|
sig_b13 = crypto_sign(iss_priv, wrong_aud_mandate.signing_input())
|
||||||
|
compact_b13 = encode_jws(wrong_aud_mandate, sig_b13)
|
||||||
|
|
||||||
|
vectors.append(TestVector(
|
||||||
|
id="B.13",
|
||||||
|
description="Wrong audience → ACTAudienceMismatchError",
|
||||||
|
valid=False,
|
||||||
|
expected_exception=ACTAudienceMismatchError,
|
||||||
|
compact=compact_b13,
|
||||||
|
))
|
||||||
|
|
||||||
|
# --- B.14: Phase 2 re-signed by iss key instead of sub → ACTSignatureError ---
|
||||||
|
record_b14 = ACTRecord.from_mandate(
|
||||||
|
mandate_b1,
|
||||||
|
kid="sub-key", # claims to be sub's key
|
||||||
|
exec_act="read.data",
|
||||||
|
pred=[], exec_ts=base_time + 300, status="completed",
|
||||||
|
)
|
||||||
|
# But signed with ISS's private key (wrong signer)
|
||||||
|
sig_b14 = crypto_sign(iss_priv, record_b14.signing_input())
|
||||||
|
compact_b14 = encode_jws(record_b14, sig_b14)
|
||||||
|
|
||||||
|
vectors.append(TestVector(
|
||||||
|
id="B.14",
|
||||||
|
description="Phase 2 re-signed by iss key instead of sub → ACTSignatureError",
|
||||||
|
valid=False,
|
||||||
|
expected_exception=ACTSignatureError,
|
||||||
|
compact=compact_b14,
|
||||||
|
))
|
||||||
|
|
||||||
|
# --- B.15: Algorithm "none" → ACTValidationError ---
|
||||||
|
# Manually construct a JWS with alg: none
|
||||||
|
import json
|
||||||
|
import base64
|
||||||
|
|
||||||
|
none_header = base64.urlsafe_b64encode(
|
||||||
|
json.dumps({"alg": "none", "typ": "act+jwt", "kid": "k"}, separators=(",", ":")).encode()
|
||||||
|
).rstrip(b"=").decode()
|
||||||
|
none_payload = base64.urlsafe_b64encode(
|
||||||
|
json.dumps({"iss": "a", "sub": "b"}, separators=(",", ":")).encode()
|
||||||
|
).rstrip(b"=").decode()
|
||||||
|
compact_b15 = f"{none_header}.{none_payload}."
|
||||||
|
|
||||||
|
vectors.append(TestVector(
|
||||||
|
id="B.15",
|
||||||
|
description='Algorithm "none" → ACTValidationError',
|
||||||
|
valid=False,
|
||||||
|
expected_exception=ACTValidationError,
|
||||||
|
compact=compact_b15,
|
||||||
|
))
|
||||||
|
|
||||||
|
context = {
|
||||||
|
"iss_priv": iss_priv,
|
||||||
|
"iss_pub": iss_pub,
|
||||||
|
"sub_priv": sub_priv,
|
||||||
|
"sub_pub": sub_pub,
|
||||||
|
"agent_c_priv": agent_c_priv,
|
||||||
|
"agent_c_pub": agent_c_pub,
|
||||||
|
"registry": registry,
|
||||||
|
"resolver": resolver,
|
||||||
|
"base_time": base_time,
|
||||||
|
"compacts": compacts,
|
||||||
|
"parent1_record": parent1_record,
|
||||||
|
"parent2_record": parent2_record,
|
||||||
|
"mandate_b1": mandate_b1,
|
||||||
|
}
|
||||||
|
|
||||||
|
return vectors, context
|
||||||
|
|
||||||
|
|
||||||
|
def validate_vectors() -> bool:
|
||||||
|
"""Run all test vectors and validate results.
|
||||||
|
|
||||||
|
Returns True if all vectors pass.
|
||||||
|
|
||||||
|
Reference: ACT Appendix B.
|
||||||
|
"""
|
||||||
|
vectors, ctx = generate_vectors()
|
||||||
|
resolver = ctx["resolver"]
|
||||||
|
base_time = ctx["base_time"]
|
||||||
|
|
||||||
|
verifier = ACTVerifier(
|
||||||
|
key_resolver=resolver,
|
||||||
|
verifier_id="agent-subject",
|
||||||
|
trusted_issuers={"agent-issuer"},
|
||||||
|
)
|
||||||
|
|
||||||
|
passed = 0
|
||||||
|
failed = 0
|
||||||
|
|
||||||
|
for v in vectors:
|
||||||
|
try:
|
||||||
|
if v.id == "B.7":
|
||||||
|
# Special case: test cap escalation during delegation creation
|
||||||
|
try:
|
||||||
|
from .delegation import verify_capability_subset
|
||||||
|
verify_capability_subset(
|
||||||
|
[Capability(action="read.data", constraints={"max_records": 10})],
|
||||||
|
[Capability(action="read.data", constraints={"max_records": 100})],
|
||||||
|
)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
except ACTPrivilegeEscalationError:
|
||||||
|
print(f" PASS {v.id}: {v.description}")
|
||||||
|
passed += 1
|
||||||
|
continue
|
||||||
|
|
||||||
|
if v.valid:
|
||||||
|
# Valid vectors: should parse and verify without error
|
||||||
|
header, claims, sig, si = decode_jws(v.compact)
|
||||||
|
kid = header["kid"]
|
||||||
|
pub = resolver.resolve(kid, header=header)
|
||||||
|
crypto_verify(pub, sig, si)
|
||||||
|
print(f" PASS {v.id}: {v.description}")
|
||||||
|
passed += 1
|
||||||
|
else:
|
||||||
|
# Invalid vectors: should raise the expected exception
|
||||||
|
try:
|
||||||
|
if v.expected_exception == ACTDelegationError:
|
||||||
|
header, claims, sig, si = decode_jws(v.compact)
|
||||||
|
kid = header["kid"]
|
||||||
|
pub = resolver.resolve(kid, header=header)
|
||||||
|
crypto_verify(pub, sig, si)
|
||||||
|
# Parse and check delegation
|
||||||
|
from .token import ACTMandate as _M
|
||||||
|
m = _M.from_claims(header, claims)
|
||||||
|
from .delegation import verify_delegation_chain
|
||||||
|
verify_delegation_chain(m, lambda d: resolver.resolve(d))
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTCapabilityError:
|
||||||
|
header, claims, sig, si = decode_jws(v.compact)
|
||||||
|
kid = header["kid"]
|
||||||
|
pub = resolver.resolve(kid, header=header)
|
||||||
|
crypto_verify(pub, sig, si)
|
||||||
|
r = ACTRecord.from_claims(header, claims)
|
||||||
|
cap_actions = {c.action for c in r.cap}
|
||||||
|
if r.exec_act not in cap_actions:
|
||||||
|
raise ACTCapabilityError("exec_act mismatch")
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTDAGError:
|
||||||
|
header, claims, sig, si = decode_jws(v.compact)
|
||||||
|
kid = header["kid"]
|
||||||
|
pub = resolver.resolve(kid, header=header)
|
||||||
|
crypto_verify(pub, sig, si)
|
||||||
|
r = ACTRecord.from_claims(header, claims)
|
||||||
|
ledger = ACTLedger()
|
||||||
|
validate_dag(r, ledger)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTExpiredError:
|
||||||
|
verifier.verify_mandate(v.compact, check_sub=False)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTAudienceMismatchError:
|
||||||
|
verifier.verify_mandate(
|
||||||
|
v.compact,
|
||||||
|
now=base_time + 100,
|
||||||
|
check_sub=False,
|
||||||
|
)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTSignatureError:
|
||||||
|
header, claims, sig, si = decode_jws(v.compact)
|
||||||
|
kid = header["kid"]
|
||||||
|
pub = resolver.resolve(kid, header=header)
|
||||||
|
crypto_verify(pub, sig, si)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
elif v.expected_exception == ACTValidationError:
|
||||||
|
decode_jws(v.compact)
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}")
|
||||||
|
failed += 1
|
||||||
|
else:
|
||||||
|
print(f" SKIP {v.id}: Unknown expected exception type")
|
||||||
|
failed += 1
|
||||||
|
except Exception as e:
|
||||||
|
if isinstance(e, v.expected_exception):
|
||||||
|
print(f" PASS {v.id}: {v.description}")
|
||||||
|
passed += 1
|
||||||
|
else:
|
||||||
|
print(f" FAIL {v.id}: Expected {v.expected_exception.__name__}, "
|
||||||
|
f"got {type(e).__name__}: {e}")
|
||||||
|
failed += 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f" FAIL {v.id}: Unexpected error: {type(e).__name__}: {e}")
|
||||||
|
failed += 1
|
||||||
|
|
||||||
|
print(f"\nResults: {passed} passed, {failed} failed out of {len(vectors)}")
|
||||||
|
return failed == 0
|
||||||
323
workspace/packages/act/act/verify.py
Normal file
@@ -0,0 +1,323 @@
"""ACT unified verification entry point.

Provides ACTVerifier with verify_mandate (Phase 1) and verify_record
(Phase 2) methods implementing the full verification procedures.

Reference: ACT §8 (Verification Procedure).
"""

from __future__ import annotations

import logging
import time
from typing import Any

from .crypto import ACTKeyResolver, PublicKey, verify as crypto_verify
from .dag import ACTStore, validate_dag
from .delegation import verify_delegation_chain
from .errors import (
    ACTAudienceMismatchError,
    ACTCapabilityError,
    ACTExpiredError,
    ACTPhaseError,
    ACTSignatureError,
    ACTValidationError,
)
from .token import (
    ACTMandate,
    ACTRecord,
    decode_jws,
)

logger = logging.getLogger(__name__)

# Default clock skew tolerance for exp check — ACT §8.1 step 6.
DEFAULT_EXP_CLOCK_SKEW: int = 300  # 5 minutes

# Default clock skew tolerance for iat future check — ACT §8.1 step 7.
DEFAULT_IAT_FUTURE_TOLERANCE: int = 30  # 30 seconds


class ACTVerifier:
    """Unified ACT verification entry point.

    Implements the full verification procedure for both Phase 1
    (Authorization Mandate) and Phase 2 (Execution Record) tokens.

    Reference: ACT §8.
    """

    def __init__(
        self,
        key_resolver: ACTKeyResolver,
        *,
        verifier_id: str | None = None,
        trusted_issuers: set[str] | None = None,
        exp_clock_skew: int = DEFAULT_EXP_CLOCK_SKEW,
        iat_future_tolerance: int = DEFAULT_IAT_FUTURE_TOLERANCE,
        resolve_parent_compact: Any | None = None,
    ) -> None:
        """Initialize the verifier.

        Args:
            key_resolver: Key resolver for all trust tiers.
            verifier_id: This verifier's own identifier (for aud/sub checks).
            trusted_issuers: Set of trusted issuer identifiers.
                If None, iss check is skipped.
            exp_clock_skew: Maximum clock skew for expiration (seconds).
            iat_future_tolerance: Maximum future iat tolerance (seconds).
            resolve_parent_compact: Callback to resolve parent ACT compact
                form by jti (for delegation chain).
        """
        self._key_resolver = key_resolver
        self._verifier_id = verifier_id
        self._trusted_issuers = trusted_issuers
        self._exp_clock_skew = exp_clock_skew
        self._iat_future_tolerance = iat_future_tolerance
        self._resolve_parent_compact = resolve_parent_compact

    def verify_mandate(
        self,
        compact: str,
        *,
        now: int | None = None,
        check_aud: bool = True,
        check_sub: bool = True,
    ) -> ACTMandate:
        """Verify a Phase 1 Authorization Mandate.

        Implements ACT §8.1 verification steps 1-13.

        Args:
            compact: JWS Compact Serialization of the Phase 1 ACT.
            now: Current time override (for testing). Defaults to time.time().
            check_aud: Whether to check aud contains verifier_id.
            check_sub: Whether to check sub matches verifier_id.

        Returns:
            Verified ACTMandate.

        Raises:
            ACTValidationError: Malformed token (steps 2-3, 11).
            ACTSignatureError: Signature failure (step 5).
            ACTExpiredError: Token expired (step 6).
            ACTAudienceMismatchError: Wrong audience (step 8).
            ACTDelegationError: Invalid delegation chain (step 12).
        """
        current_time = now if now is not None else int(time.time())

        # Step 1: Parse JWS Compact Serialization
        header, claims, signature, signing_input = decode_jws(compact)

        # Steps 2-3: typ and alg checked by decode_jws

        # Phase check: must NOT have exec_act
        if "exec_act" in claims:
            raise ACTPhaseError(
                "Token contains exec_act — this is a Phase 2 token, "
                "not a Phase 1 mandate"
            )

        # Step 4: Resolve public key for kid
        kid = header["kid"]
        public_key = self._key_resolver.resolve(kid, header=header)

        # Step 5: Verify JWS signature
        crypto_verify(public_key, signature, signing_input)

        # Build mandate object for claim validation
        mandate = ACTMandate.from_claims(header, claims)

        # Step 6: Check exp not passed
        if current_time > mandate.exp + self._exp_clock_skew:
            raise ACTExpiredError(
                f"Token expired: exp={mandate.exp}, "
                f"now={current_time}, skew={self._exp_clock_skew}"
            )

        # Step 7: Check iat not unreasonably future
        if mandate.iat > current_time + self._iat_future_tolerance:
            raise ACTValidationError(
                f"Token iat is too far in the future: iat={mandate.iat}, "
                f"now={current_time}, tolerance={self._iat_future_tolerance}"
            )

        # Step 8: Check aud contains verifier's identity
        if check_aud and self._verifier_id is not None:
            aud = mandate.aud
            if isinstance(aud, str):
                aud_list = [aud]
            else:
                aud_list = aud
            if self._verifier_id not in aud_list:
                raise ACTAudienceMismatchError(
                    f"Verifier id {self._verifier_id!r} not in aud: {aud_list}"
                )

        # Step 9: Check iss is trusted
        if self._trusted_issuers is not None:
            if mandate.iss not in self._trusted_issuers:
                raise ACTValidationError(
                    f"Issuer {mandate.iss!r} is not trusted"
                )

        # Step 10: Check sub matches verifier's identity
        if check_sub and self._verifier_id is not None:
            if mandate.sub != self._verifier_id:
                raise ACTValidationError(
                    f"sub {mandate.sub!r} does not match verifier id "
                    f"{self._verifier_id!r}"
                )

        # Step 11: Check all required claims (done by from_claims + validate)
        mandate.validate()

        # Step 12: Verify delegation chain
        if mandate.delegation is not None and mandate.delegation.chain:
            def _resolve_key(delegator_id: str) -> PublicKey:
                return self._key_resolver.resolve(delegator_id)

            verify_delegation_chain(
                mandate,
                resolve_key=_resolve_key,
                resolve_parent_compact=self._resolve_parent_compact,
            )

        return mandate

    def verify_record(
        self,
        compact: str,
        store: ACTStore | None = None,
        *,
        now: int | None = None,
        check_aud: bool = True,
    ) -> ACTRecord:
        """Verify a Phase 2 Execution Record.

        Implements all Phase 1 steps (§8.1) plus Phase 2 steps (§8.2).

        Args:
            compact: JWS Compact Serialization of the Phase 2 ACT.
            store: ACT store for DAG validation. If None, DAG checks
                are limited to capability consistency only.
            now: Current time override (for testing).
            check_aud: Whether to check aud contains verifier_id.

        Returns:
            Verified ACTRecord.

        Raises:
            ACTValidationError: Malformed token.
            ACTSignatureError: Signature failure or wrong signer.
            ACTExpiredError: Token expired.
            ACTAudienceMismatchError: Wrong audience.
            ACTCapabilityError: exec_act not in cap.
            ACTDAGError: DAG validation failure.
        """
        current_time = now if now is not None else int(time.time())

        # Step 1: Parse JWS
        header, claims, signature, signing_input = decode_jws(compact)

        # Phase check
        if "exec_act" not in claims:
            raise ACTPhaseError(
                "Token does not contain exec_act — this is a Phase 1 "
                "mandate, not a Phase 2 record"
            )

        # Step 4: Resolve key — in Phase 2, kid MUST be sub's key
        kid = header["kid"]
        public_key = self._key_resolver.resolve(kid, header=header)

        # Step 5: Verify JWS signature (Step 17: by sub's key)
        crypto_verify(public_key, signature, signing_input)

        # Build record
        record = ACTRecord.from_claims(header, claims)

        # Step 6: Check exp
        if current_time > record.exp + self._exp_clock_skew:
            raise ACTExpiredError(
                f"Token expired: exp={record.exp}, "
                f"now={current_time}, skew={self._exp_clock_skew}"
            )

        # Step 7: iat future check
        if record.iat > current_time + self._iat_future_tolerance:
            raise ACTValidationError(
                f"Token iat is too far in the future: iat={record.iat}"
            )

        # Step 8: aud check
        if check_aud and self._verifier_id is not None:
            aud = record.aud
            if isinstance(aud, str):
                aud_list = [aud]
            else:
                aud_list = aud
            if self._verifier_id not in aud_list:
                raise ACTAudienceMismatchError(
                    f"Verifier id {self._verifier_id!r} not in aud: {aud_list}"
                )

        # Step 9: iss trust check
        if self._trusted_issuers is not None:
            if record.iss not in self._trusted_issuers:
                raise ACTValidationError(
                    f"Issuer {record.iss!r} is not trusted"
                )

        # Step 11: required claims validation
        record.validate()

        # Step 12: delegation chain
        if record.delegation is not None and record.delegation.chain:
            def _resolve_key(delegator_id: str) -> PublicKey:
                return self._key_resolver.resolve(delegator_id)

            # Reuse verify_delegation_chain with ACTRecord fields
            # (it accesses .delegation which exists on ACTRecord too)
            from .delegation import verify_delegation_chain as _vdc
            # Create a temporary mandate-like view — delegation chain
            # verification only needs delegation and cap fields
            mandate_view = ACTMandate(
                alg=record.alg, kid=record.kid,
                iss=record.iss, sub=record.sub, aud=record.aud,
                iat=record.iat, exp=record.exp, jti=record.jti,
                task=record.task, cap=record.cap,
                delegation=record.delegation,
            )
            _vdc(
                mandate_view,
                resolve_key=_resolve_key,
                resolve_parent_compact=self._resolve_parent_compact,
            )

        # Phase 2 step 13: exec_act matches cap[].action
        cap_actions = {c.action for c in record.cap}
        if record.exec_act not in cap_actions:
            raise ACTCapabilityError(
                f"exec_act {record.exec_act!r} does not match any "
                f"cap[].action: {sorted(cap_actions)}"
            )

        # Phase 2 step 14: DAG validation
        if store is not None:
            validate_dag(record, store)

        # Phase 2 step 15: exec_ts checks
        if record.exec_ts < record.iat:
            raise ACTValidationError(
                f"exec_ts {record.exec_ts} is before iat {record.iat}"
            )
        if record.exec_ts > record.exp:
            logger.warning(
                "exec_ts %d is after exp %d — execution after mandate expiry",
                record.exec_ts, record.exp,
            )

        # Phase 2 step 16: status validation (done by record.validate())

        return record
174
workspace/packages/act/bench/bench_act.py
Normal file
@@ -0,0 +1,174 @@
"""ACT performance benchmarks.

Measures Phase 1 creation time (construct + sign + encode) against
the 500µs target from the specification.
"""

import time
import uuid
import statistics

from act import (
    ACTMandate,
    ACTRecord,
    Capability,
    TaskClaim,
    encode_jws,
    decode_jws,
    generate_ed25519_keypair,
    generate_p256_keypair,
    sign,
    verify,
    transition_to_record,
)


def bench_phase1_ed25519(n: int = 10000) -> None:
    """Benchmark Phase 1 creation with Ed25519."""
    priv, pub = generate_ed25519_keypair()

    # Warmup
    for _ in range(100):
        m = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
        )
        sig = sign(priv, m.signing_input())
        encode_jws(m, sig)

    times = []
    for _ in range(n):
        start = time.perf_counter()
        m = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
            task=TaskClaim(purpose="benchmark"),
            cap=[Capability(action="read.data")],
        )
        sig = sign(priv, m.signing_input())
        encode_jws(m, sig)
        elapsed = time.perf_counter() - start
        times.append(elapsed * 1_000_000)  # µs

    mean = statistics.mean(times)
    median = statistics.median(times)
    p99 = sorted(times)[int(n * 0.99)]
    print(f"Phase 1 Ed25519 (n={n}):")
    print(f" Mean: {mean:.1f} µs")
    print(f" Median: {median:.1f} µs")
    print(f" P99: {p99:.1f} µs")
    print(f" Target: <= 500 µs {'PASS' if mean <= 500 else 'FAIL'}")
    print()


def bench_phase1_p256(n: int = 5000) -> None:
    """Benchmark Phase 1 creation with P-256."""
    priv, pub = generate_p256_keypair()

    for _ in range(50):
        m = ACTMandate(
            alg="ES256", kid="k", iss="a", sub="b", aud="b",
            iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
        )
        sig = sign(priv, m.signing_input())
        encode_jws(m, sig)

    times = []
    for _ in range(n):
        start = time.perf_counter()
        m = ACTMandate(
            alg="ES256", kid="k", iss="a", sub="b", aud="b",
            iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
            task=TaskClaim(purpose="benchmark"),
            cap=[Capability(action="read.data")],
        )
        sig = sign(priv, m.signing_input())
        encode_jws(m, sig)
        elapsed = time.perf_counter() - start
        times.append(elapsed * 1_000_000)

    mean = statistics.mean(times)
    median = statistics.median(times)
    p99 = sorted(times)[int(n * 0.99)]
    print(f"Phase 1 ES256 (n={n}):")
    print(f" Mean: {mean:.1f} µs")
    print(f" Median: {median:.1f} µs")
    print(f" P99: {p99:.1f} µs")
    print()


def bench_phase2_transition(n: int = 5000) -> None:
    """Benchmark Phase 1 -> Phase 2 transition."""
    iss_priv, _ = generate_ed25519_keypair()
    sub_priv, _ = generate_ed25519_keypair()

    mandate = ACTMandate(
        alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
        iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
    )

    # Warmup
    for _ in range(50):
        transition_to_record(
            mandate, sub_kid="sk", sub_private_key=sub_priv,
            exec_act="x.y", pred=[], status="completed",
        )

    times = []
    for _ in range(n):
        start = time.perf_counter()
        transition_to_record(
            mandate, sub_kid="sk", sub_private_key=sub_priv,
            exec_act="x.y", pred=[], status="completed",
        )
        elapsed = time.perf_counter() - start
        times.append(elapsed * 1_000_000)

    mean = statistics.mean(times)
    median = statistics.median(times)
    print(f"Phase 2 Transition (n={n}):")
    print(f" Mean: {mean:.1f} µs")
    print(f" Median: {median:.1f} µs")
    print()


def bench_verify(n: int = 5000) -> None:
    """Benchmark JWS decode + verify."""
    priv, pub = generate_ed25519_keypair()
    m = ACTMandate(
        alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
        iat=1772064000, exp=1772064900, jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
    )
    sig = sign(priv, m.signing_input())
    compact = encode_jws(m, sig)

    # Warmup
    for _ in range(50):
        _, _, s, si = decode_jws(compact)
        verify(pub, s, si)

    times = []
    for _ in range(n):
        start = time.perf_counter()
        _, _, s, si = decode_jws(compact)
        verify(pub, s, si)
        elapsed = time.perf_counter() - start
        times.append(elapsed * 1_000_000)

    mean = statistics.mean(times)
    median = statistics.median(times)
    print(f"Decode + Verify (n={n}):")
    print(f" Mean: {mean:.1f} µs")
    print(f" Median: {median:.1f} µs")
    print()


if __name__ == "__main__":
    bench_phase1_ed25519()
    bench_phase1_p256()
    bench_phase2_transition()
    bench_verify()
194
workspace/packages/act/docs/section-1.5-applicability.md
Normal file
@@ -0,0 +1,194 @@
# Section 1.5: Applicability (for draft-nennemann-act-01)

Insert after Section 1.4 (Relationship to Related Work).

---

### 1.5. Applicability

ACT is designed as a general-purpose primitive for AI agent
authorization and execution accountability. While a sibling
specification [I-D.nennemann-wimse-ect] profiles execution context
tokens specifically for the WIMSE working group's workload identity
infrastructure, ACT operates without any shared identity plane. This
section identifies deployment contexts where ACT applies independently
of WIMSE, and clarifies how ACT complements — rather than competes
with — ecosystem-specific agent protocols.

#### 1.5.1. Model Context Protocol (MCP) Tool-Use Flows

The Model Context Protocol [MCP-SPEC] defines a client-server
interface by which LLM hosts invoke external tools via structured
JSON-RPC calls. MCP 2025-11-25 mandates OAuth 2.1 for transport-layer
authentication, but provides no mechanism for carrying per-invocation
authorization constraints or for producing a tamper-evident record
of what arguments were passed and what result was returned.

ACT addresses this gap as follows: when an MCP host is about to
dispatch a tool call on behalf of an agent, it SHOULD issue a Phase 1
ACT Mandate encoding the permitted tool name (e.g., as a capability
constraint), the declared scope, and any parameter-level constraints
applicable to that invocation. The MCP server, upon receiving the
request, MAY validate the ACT Mandate and, upon completing the tool
execution, SHOULD transition the token to Phase 2 by appending
SHA-256 hashes of the serialized input arguments and the JSON
response, then re-sign. The resulting Phase 2 ACT constitutes an
unforgeable record that a specific tool was called with specific
arguments and returned a specific result, independently of MCP's
OAuth layer.

This integration requires no modification to MCP transport; the ACT
SHOULD be carried in the `ACT-Mandate` and `ACT-Record` HTTP headers
defined in Section 9.1 of this document.
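The input/output hash binding described above can be sketched with stdlib primitives. This is a minimal illustration, not refimpl code: the canonicalization choice (sorted-key compact JSON) and the claim names `input_sha256`/`output_sha256` are assumptions for the example, not normative ACT claim names.

```python
import hashlib
import json


def act_io_hashes(arguments: dict, result: dict) -> dict:
    """Hash a tool call's input and output as a Phase 2 ACT would bind them.

    Canonical JSON (sorted keys, compact separators) keeps the digest
    stable across serializers. Claim names are illustrative.
    """
    def digest(obj: dict) -> str:
        canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    return {"input_sha256": digest(arguments), "output_sha256": digest(result)}


# One tool invocation: the host hashes the argument block, the server
# hashes the JSON response before re-signing the Phase 2 token.
hashes = act_io_hashes(
    {"tool": "search_docs", "query": "delegation"},
    {"matches": 3},
)
```

Because the serialization is canonical, any verifier that re-serializes the logged arguments the same way reproduces the digest and detects tampering.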
|
||||||
|
#### 1.5.2. OpenAI Agents SDK and Function Calling
|
||||||
|
|
||||||
|
The OpenAI Agents SDK [OPENAI-AGENTS-SDK] enables composition of
|
||||||
|
agents via handoffs — structured transfers of control from one agent
|
||||||
|
to another, each potentially invoking registered function tools. The
|
||||||
|
SDK provides no built-in mechanism for a receiving agent to verify
|
||||||
|
that the handoff was authorized by a named principal, nor for the
|
||||||
|
invoking agent to produce a verifiable record of what functions it
|
||||||
|
called.
|
||||||
|
|
||||||
|
ACT is applicable at the handoff boundary: the orchestrating agent
|
||||||
|
SHOULD issue a Phase 1 ACT Mandate to the receiving agent at the
|
||||||
|
moment of handoff, encoding the permitted function set as
|
||||||
|
capability constraints and the maximum privilege the receiving agent
|
||||||
|
MAY exercise. The receiving agent SHOULD attach its Phase 2 ACT
|
||||||
|
Record to any callback or downstream response, providing the
|
||||||
|
orchestrator with cryptographic evidence of the actions taken. In
|
||||||
|
multi-turn chains involving multiple handoffs, the DAG linkage
|
||||||
|
(Section 7) allows each handoff to be expressed as a parent-child
|
||||||
|
edge, preserving the full causal ordering of the agent invocation
|
||||||
|
sequence.
|
||||||
|
|
||||||
|
Implementations that use the OpenAI function calling API directly,
|
||||||
|
without the Agents SDK, MAY apply ACT at the application layer: the
|
||||||
|
calling process issues a Phase 1 ACT before the function call
|
||||||
|
parameter block is finalized, and the receiving function handler
|
||||||
|
returns a Phase 2 ACT alongside its JSON result.

#### 1.5.3. LangGraph and LangChain Agent Graphs

LangGraph [LANGGRAPH] models agent workflows as typed StateGraphs in
which nodes represent agent invocations or tool calls and edges
represent conditional transitions. The DAG structure of ACT (Section
7) is a natural fit for this model: each LangGraph node that performs
an observable action corresponds to exactly one ACT task identifier
(`tid`), and directed edges in the LangGraph correspond to `pred`
(predecessor) references in successor ACTs.

ACT is applicable at the node boundary: when a LangGraph node
dispatches a sub-agent or invokes a tool with side effects, it SHOULD
issue a Phase 1 ACT Mandate encoding the node's permitted actions
before any external call is made. Upon transition out of the node,
a Phase 2 ACT Record SHOULD be produced and attached to the
LangGraph state object alongside the node's output. Downstream nodes
that fan-in from multiple predecessors MAY retrieve the set of parent
ACT identifiers from the shared state to populate their `pred` array,
thereby expressing LangGraph's fan-in semantics within the ACT DAG
without any additional infrastructure.

In contrast to LangGraph's built-in state audit trail, which is
mutable in-process memory, Phase 2 ACTs are cryptographically signed
and portable: they can be exported from a LangGraph run and
submitted to an external audit ledger, satisfying compliance
requirements that cannot be met by in-process logging alone.
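The fan-in step can be sketched in plain Python. This is a non-normative illustration: the state shape, node names, and `jti` values are hypothetical, standing in for whatever channel a LangGraph application uses to share per-node ACT Record identifiers.

```python
# Hypothetical LangGraph-style shared state carrying the ACT Record
# identifiers (jti values) produced by predecessor nodes.
state = {
    "act_records": {"fetch_node": "jti-fetch-01", "rank_node": "jti-rank-02"},
}

def fan_in_pred(state: dict, parents: list[str]) -> list[str]:
    # Collect parent ACT identifiers into the successor's `pred` array,
    # expressing LangGraph fan-in as DAG edges.
    return [state["act_records"][p] for p in parents]

pred = fan_in_pred(state, ["fetch_node", "rank_node"])
# pred == ["jti-fetch-01", "jti-rank-02"] for the successor ACT.
```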

#### 1.5.4. Google Agent2Agent (A2A) Protocol

The Agent2Agent protocol [A2A-SPEC] defines a task-oriented JSON-RPC
interface for inter-agent communication, with authentication
delegated to OAuth 2.0 or API key schemes declared in each agent's
Agent Card. A2A provides no mechanism for a receiving agent to
verify the authorization provenance of a task request beyond the
transport-layer credential, and produces no token that represents
the execution of the task in a verifiable, portable form.

ACT is applicable as a session-layer accountability complement to
A2A: a client agent SHOULD include a Phase 1 ACT Mandate in the
`metadata` field of the A2A Task object, encoding the task type as
a capability constraint and the delegating agent's identity as the
ACT issuer. The receiving agent SHOULD validate the Mandate before
beginning task execution and SHOULD return a Phase 2 ACT Record
as an artifact in the A2A TaskResult, enabling the client agent to
retain cryptographic proof of what was executed on its behalf.

This integration does not require modification to A2A's transport or
authentication scheme; ACT and A2A's OAuth credentials operate at
independent layers and are not redundant. A2A's credential answers
"is this client permitted to contact this server?"; the ACT Mandate
answers "is this agent permitted to request this specific task
under these constraints?".
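The layering can be sketched as a plain data structure. This is a non-normative illustration: the JSON-RPC method, task id, metadata key, and the JWS string are all hypothetical placeholders, not values defined by A2A or this draft.

```python
# Hypothetical A2A Task whose metadata carries a Phase 1 ACT Mandate
# as a compact JWS string (three dot-separated base64url segments).
mandate_jws = "eyJhbGciOiJFZERTQSJ9.eyJpc3MiOiJhZ2VudC1hIn0.c2ln"

task = {
    "jsonrpc": "2.0",
    "method": "tasks/send",
    "params": {
        "id": "task-123",
        "metadata": {"act-mandate": mandate_jws},
    },
}

# The transport credential (OAuth / API key) still authenticates the
# connection; the embedded Mandate authorizes this specific task.
```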

#### 1.5.5. Enterprise Orchestration Without WIMSE (CrewAI, AutoGen)

Enterprise orchestration frameworks such as CrewAI [CREWAI] and
AutoGen [AUTOGEN] deploy multi-agent systems within a single
organizational boundary, typically without SPIFFE/SPIRE workload
identity infrastructure. In these environments, OAuth Authorization
Servers are often unavailable or impractical to deploy for
intra-process agent communication.

ACT is applicable in this context via its Tier 1 (pre-shared key)
trust model (Section 5.2): each agent role in a CrewAI Crew or
AutoGen ConversableAgent graph is assigned an Ed25519 keypair at
instantiation time. The orchestrating agent issues Phase 1 Mandates
to worker agents before delegating tasks, constraining each worker
to only the tools and actions relevant to its role. Worker agents
produce Phase 2 Records on task completion. The resulting ACT chain
is exportable as a structured audit trail that satisfies the
per-action logging requirements of DORA [DORA] and EU AI Act
Article 12 [EUAIA] without requiring shared infrastructure beyond
the ability to exchange public keys at deployment time.

Implementations SHOULD NOT use ACT's self-assertion mode (where an
agent issues and records its own mandate without external sign-off)
in regulated workflows; at minimum, the orchestrating agent MUST
sign the initial Mandate so that accountability is anchored to a
principal outside the executing agent.
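The per-role constraint discipline can be sketched without any crypto machinery. This is a non-normative illustration: the role names and action strings are hypothetical, and in a real Tier 1 deployment each role would additionally hold an Ed25519 keypair generated at instantiation time.

```python
# Hypothetical per-role capability sets for a Tier 1 deployment.
role_caps = {
    "researcher": [{"action": "search.web"}],
    "writer": [{"action": "write.result"}],
}

def mandate_caps_for(role: str) -> list[dict]:
    # The orchestrator constrains each worker's Phase 1 Mandate to its
    # own role's capability set, never the union across all roles.
    return role_caps[role]
```

Issuing the Mandate with only the role's own capabilities is what makes a later privilege-escalation check meaningful: a worker's Phase 2 Record can be rejected if its executed action falls outside this set.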

#### 1.5.6. Relationship to WIMSE ECT

Where WIMSE infrastructure is deployed, ACT and the WIMSE Execution
Context Token [I-D.nennemann-wimse-ect] serve complementary and
non-overlapping functions. The ECT records workload-level execution
in WIMSE terms — which SPIFFE workload executed, in which trust
domain, against which service. ACT records the authorization
provenance — which agent was permitted to request which action,
under what capability constraints, by whose authority — and
transitions that authorization record into an execution record upon
task completion.

In mixed environments, both tokens SHOULD be carried simultaneously:
the `Workload-Identity` header carries the WIMSE ECT; the
`ACT-Record` header carries the ACT. Verifiers MAY correlate the
two by matching the ACT `tid` claim against application-layer
identifiers present in the ECT's task context. Neither token is a
profile or extension of the other; they operate at different
abstraction layers and their co-presence is additive.

---

## Informative References to Add

```
[MCP-SPEC]  Model Context Protocol Specification, 2025-11-25,
            <https://modelcontextprotocol.io/specification/2025-11-25>

[OPENAI-AGENTS-SDK]
            OpenAI, "Agents SDK",
            <https://openai.github.io/openai-agents-python/>

[LANGGRAPH] LangChain, "LangGraph Documentation",
            <https://langchain-ai.github.io/langgraph/>

[A2A-SPEC]  Google, "Agent2Agent (A2A) Protocol",
            <https://github.com/a2aproject/A2A>

[CREWAI]    CrewAI, "CrewAI Documentation",
            <https://docs.crewai.com/>

[AUTOGEN]   Microsoft, "AutoGen Documentation",
            <https://microsoft.github.io/autogen/>
```
1866
workspace/packages/act/draft-nennemann-act-01.md
Normal file
File diff suppressed because it is too large
23
workspace/packages/act/pyproject.toml
Normal file
@@ -0,0 +1,23 @@
[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"

[project]
name = "ietf-act"
version = "0.1.0"
description = "Agent Context Token (ACT) — JWT-based authorization and execution accountability for AI agents"
requires-python = ">=3.11"
dependencies = [
    "cryptography>=42.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0",
]

[tool.setuptools.packages.find]
where = ["."]

[tool.pytest.ini_options]
testpaths = ["tests"]
145
workspace/packages/act/tests/test_crypto.py
Normal file
@@ -0,0 +1,145 @@
"""Tests for act.crypto module."""

import pytest

from act.crypto import (
    ACTKeyResolver,
    KeyRegistry,
    X509TrustStore,
    b64url_sha256,
    compute_sha256,
    did_key_from_ed25519,
    generate_ed25519_keypair,
    generate_p256_keypair,
    resolve_did_key,
    sign,
    verify,
)
from act.errors import ACTKeyResolutionError, ACTSignatureError


class TestEd25519:
    def test_generate_keypair(self):
        priv, pub = generate_ed25519_keypair()
        assert priv is not None
        assert pub is not None

    def test_sign_verify(self):
        priv, pub = generate_ed25519_keypair()
        data = b"test data"
        sig = sign(priv, data)
        verify(pub, sig, data)

    def test_verify_wrong_data(self):
        priv, pub = generate_ed25519_keypair()
        sig = sign(priv, b"correct data")
        with pytest.raises(ACTSignatureError):
            verify(pub, sig, b"wrong data")

    def test_verify_wrong_key(self):
        priv1, pub1 = generate_ed25519_keypair()
        _, pub2 = generate_ed25519_keypair()
        sig = sign(priv1, b"data")
        with pytest.raises(ACTSignatureError):
            verify(pub2, sig, b"data")


class TestP256:
    def test_generate_keypair(self):
        priv, pub = generate_p256_keypair()
        assert priv is not None
        assert pub is not None

    def test_sign_verify(self):
        priv, pub = generate_p256_keypair()
        data = b"test data for p256"
        sig = sign(priv, data)
        assert len(sig) == 64  # r||s, 32 bytes each
        verify(pub, sig, data)

    def test_verify_wrong_data(self):
        priv, pub = generate_p256_keypair()
        sig = sign(priv, b"correct")
        with pytest.raises(ACTSignatureError):
            verify(pub, sig, b"wrong")


class TestSHA256:
    def test_compute(self):
        h = compute_sha256(b"hello")
        assert len(h) == 32

    def test_b64url(self):
        result = b64url_sha256(b"hello world")
        assert "=" not in result
        assert isinstance(result, str)


class TestKeyRegistry:
    def test_register_and_get(self):
        reg = KeyRegistry()
        _, pub = generate_ed25519_keypair()
        reg.register("key-1", pub)
        assert reg.get("key-1") is pub
        assert "key-1" in reg
        assert len(reg) == 1

    def test_missing_key(self):
        reg = KeyRegistry()
        assert reg.get("missing") is None
        assert "missing" not in reg


class TestDIDKey:
    def test_ed25519_roundtrip(self):
        _, pub = generate_ed25519_keypair()
        did = did_key_from_ed25519(pub)
        assert did.startswith("did:key:z6Mk")
        resolved = resolve_did_key(did)
        # Verify same key by comparing raw public key bytes
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
        original_bytes = pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
        resolved_bytes = resolved.public_bytes(Encoding.Raw, PublicFormat.Raw)
        assert original_bytes == resolved_bytes

    def test_invalid_prefix(self):
        with pytest.raises(ACTKeyResolutionError):
            resolve_did_key("did:web:example.com")

    def test_with_fragment(self):
        _, pub = generate_ed25519_keypair()
        did = did_key_from_ed25519(pub)
        did_with_fragment = f"{did}#{did.split(':')[2]}"
        resolved = resolve_did_key(did_with_fragment)
        assert resolved is not None


class TestACTKeyResolver:
    def test_tier1_resolution(self):
        reg = KeyRegistry()
        _, pub = generate_ed25519_keypair()
        reg.register("my-key", pub)
        resolver = ACTKeyResolver(registry=reg)
        assert resolver.resolve("my-key") is pub

    def test_tier3_did_key(self):
        _, pub = generate_ed25519_keypair()
        did = did_key_from_ed25519(pub)
        resolver = ACTKeyResolver()
        resolved = resolver.resolve(did)
        assert resolved is not None

    def test_unresolvable(self):
        resolver = ACTKeyResolver()
        with pytest.raises(ACTKeyResolutionError):
            resolver.resolve("unknown-kid")

    def test_did_web_resolver_callback(self):
        _, pub = generate_ed25519_keypair()

        def resolver_cb(did: str):
            if did == "did:web:example.com":
                return pub
            return None

        resolver = ACTKeyResolver(did_web_resolver=resolver_cb)
        result = resolver.resolve("did:web:example.com")
        assert result is pub
103
workspace/packages/act/tests/test_dag.py
Normal file
@@ -0,0 +1,103 @@
"""Tests for act.dag module."""

import time

import pytest

from act.dag import validate_dag
from act.errors import ACTCapabilityError, ACTDAGError
from act.ledger import ACTLedger
from act.token import ACTRecord, Capability, TaskClaim


def make_record(jti, pred=None, exec_act="do.thing", exec_ts=None, cap=None):
    """Helper to create a minimal ACTRecord."""
    return ACTRecord(
        alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
        iat=1772064000, exp=1772064900,
        jti=jti,
        task=TaskClaim(purpose="t"),
        cap=cap or [Capability(action="do.thing")],
        exec_act=exec_act,
        pred=pred or [],
        exec_ts=exec_ts or 1772064100,
        status="completed",
    )


class TestDAGValidation:
    def test_root_task(self):
        ledger = ACTLedger()
        r = make_record("root-1")
        validate_dag(r, ledger)

    def test_child_with_parent(self):
        ledger = ACTLedger()
        parent = make_record("parent-1", exec_ts=1772064050)
        ledger.append(parent)
        child = make_record("child-1", pred=["parent-1"], exec_ts=1772064100)
        validate_dag(child, ledger)

    def test_fan_in(self):
        ledger = ACTLedger()
        p1 = make_record("p1", exec_ts=1772064050)
        p2 = make_record("p2", exec_ts=1772064060)
        ledger.append(p1)
        ledger.append(p2)
        child = make_record("child", pred=["p1", "p2"], exec_ts=1772064100)
        validate_dag(child, ledger)

    def test_duplicate_jti(self):
        ledger = ACTLedger()
        r = make_record("dup-1")
        ledger.append(r)
        r2 = make_record("dup-1")
        with pytest.raises(ACTDAGError, match="Duplicate"):
            validate_dag(r2, ledger)

    def test_missing_parent(self):
        ledger = ACTLedger()
        r = make_record("orphan", pred=["nonexistent"])
        with pytest.raises(ACTDAGError, match="not found"):
            validate_dag(r, ledger)

    def test_self_cycle(self):
        ledger = ACTLedger()
        r = make_record("cycle", pred=["cycle"])
        with pytest.raises(ACTDAGError, match="cycle"):
            validate_dag(r, ledger)

    def test_indirect_cycle(self):
        ledger = ACTLedger()
        # a -> b -> a would be a cycle
        a = make_record("a", pred=["b"], exec_ts=1772064100)
        b = make_record("b", pred=["a"], exec_ts=1772064100)
        ledger.append(b)
        # When validating a, following pred leads to b,
        # which has pred=["a"] — cycle!
        with pytest.raises(ACTDAGError, match="cycle"):
            validate_dag(a, ledger)

    def test_temporal_ordering_violation(self):
        ledger = ACTLedger()
        parent = make_record("parent", exec_ts=1772064200)
        ledger.append(parent)
        # Child's exec_ts is way before parent
        child = make_record("child", pred=["parent"], exec_ts=1772064100)
        with pytest.raises(ACTDAGError, match="Temporal"):
            validate_dag(child, ledger)

    def test_temporal_within_tolerance(self):
        ledger = ACTLedger()
        parent = make_record("parent", exec_ts=1772064120)
        ledger.append(parent)
        # Child exec_ts is slightly before parent but within 30s tolerance
        child = make_record("child", pred=["parent"], exec_ts=1772064100)
        validate_dag(child, ledger)

    def test_bad_exec_act(self):
        ledger = ACTLedger()
        r = make_record("bad", exec_act="not.authorized",
                        cap=[Capability(action="do.thing")])
        with pytest.raises(ACTCapabilityError):
            validate_dag(r, ledger)
229
workspace/packages/act/tests/test_delegation.py
Normal file
@@ -0,0 +1,229 @@
"""Tests for act.delegation module."""

import time
import uuid

import pytest

from act.crypto import generate_ed25519_keypair, sign, verify, compute_sha256
from act.delegation import (
    create_delegated_mandate,
    verify_capability_subset,
    verify_delegation_chain,
)
from act.errors import (
    ACTDelegationError,
    ACTPrivilegeEscalationError,
)
from act.token import (
    ACTMandate,
    Capability,
    Delegation,
    DelegationEntry,
    TaskClaim,
    _b64url_decode,
    encode_jws,
)


@pytest.fixture
def parent_setup():
    iss_priv, iss_pub = generate_ed25519_keypair()
    mandate = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-a", sub="agent-b", aud="agent-b",
        iat=1772064000, exp=1772064900,
        jti="parent-jti-1",
        task=TaskClaim(purpose="parent_task"),
        cap=[
            Capability(action="read.data", constraints={"max_records": 10}),
            Capability(action="write.result"),
        ],
        delegation=Delegation(depth=0, max_depth=3, chain=[]),
    )
    sig = sign(iss_priv, mandate.signing_input())
    compact = encode_jws(mandate, sig)
    return mandate, compact, iss_priv, iss_pub


class TestCreateDelegatedMandate:
    def test_basic_delegation(self, parent_setup):
        mandate, compact, priv, _ = parent_setup
        delegated, _ = create_delegated_mandate(
            parent_mandate=mandate, parent_compact=compact,
            delegator_private_key=priv,
            sub="agent-c", kid="key-b", iss="agent-a", aud="agent-c",
            iat=1772064010, exp=1772064600,
            jti="child-jti-1",
            cap=[Capability(action="read.data", constraints={"max_records": 5})],
            task=TaskClaim(purpose="child_task"),
        )
        assert delegated.delegation.depth == 1
        assert len(delegated.delegation.chain) == 1
        assert delegated.delegation.chain[0].delegator == "agent-a"

    def test_depth_exceeded(self, parent_setup):
        mandate, compact, priv, _ = parent_setup
        # Set parent to max depth
        mandate.delegation = Delegation(depth=3, max_depth=3, chain=[
            DelegationEntry(delegator="x", jti="j", sig="s")
            for _ in range(3)
        ])
        with pytest.raises(ACTDelegationError, match="exceeds max_depth"):
            create_delegated_mandate(
                parent_mandate=mandate, parent_compact=compact,
                delegator_private_key=priv,
                sub="c", kid="k", iss="a", aud="c",
                iat=1, exp=2, jti="j",
                cap=[Capability(action="read.data", constraints={"max_records": 5})],
                task=TaskClaim(purpose="t"),
            )

    def test_no_del_claim(self):
        priv, _ = generate_ed25519_keypair()
        mandate = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=1, exp=2,
            task=TaskClaim(purpose="t"),
            cap=[Capability(action="x.y")],
            delegation=None,  # no del claim
        )
        with pytest.raises(ACTDelegationError, match="not permitted"):
            create_delegated_mandate(
                parent_mandate=mandate, parent_compact="compact",
                delegator_private_key=priv,
                sub="c", kid="k", iss="a", aud="c",
                iat=1, exp=2, jti="j",
                cap=[Capability(action="x.y")],
                task=TaskClaim(purpose="t"),
            )

    def test_max_depth_reduction(self, parent_setup):
        mandate, compact, priv, _ = parent_setup
        delegated, _ = create_delegated_mandate(
            parent_mandate=mandate, parent_compact=compact,
            delegator_private_key=priv,
            sub="c", kid="k", iss="a", aud="c",
            iat=1, exp=2, jti="j",
            cap=[Capability(action="read.data", constraints={"max_records": 5})],
            task=TaskClaim(purpose="t"),
            max_depth=2,
        )
        assert delegated.delegation.max_depth == 2

    def test_max_depth_escalation(self, parent_setup):
        mandate, compact, priv, _ = parent_setup
        with pytest.raises(ACTDelegationError, match="exceeds parent max_depth"):
            create_delegated_mandate(
                parent_mandate=mandate, parent_compact=compact,
                delegator_private_key=priv,
                sub="c", kid="k", iss="a", aud="c",
                iat=1, exp=2, jti="j",
                cap=[Capability(action="read.data", constraints={"max_records": 5})],
                task=TaskClaim(purpose="t"),
                max_depth=10,
            )


class TestCapabilitySubset:
    def test_valid_subset(self):
        parent = [Capability(action="read.data", constraints={"max_records": 10})]
        child = [Capability(action="read.data", constraints={"max_records": 5})]
        verify_capability_subset(parent, child)

    def test_extra_action(self):
        parent = [Capability(action="read.data")]
        child = [Capability(action="delete.data")]
        with pytest.raises(ACTPrivilegeEscalationError):
            verify_capability_subset(parent, child)

    def test_numeric_escalation(self):
        parent = [Capability(action="read.data", constraints={"max_records": 10})]
        child = [Capability(action="read.data", constraints={"max_records": 100})]
        with pytest.raises(ACTPrivilegeEscalationError):
            verify_capability_subset(parent, child)

    def test_sensitivity_escalation(self):
        parent = [Capability(action="read.data",
                             constraints={"data_sensitivity": "confidential"})]
        child = [Capability(action="read.data",
                            constraints={"data_sensitivity": "internal"})]
        with pytest.raises(ACTPrivilegeEscalationError):
            verify_capability_subset(parent, child)

    def test_sensitivity_more_restrictive(self):
        parent = [Capability(action="read.data",
                             constraints={"data_sensitivity": "internal"})]
        child = [Capability(action="read.data",
                            constraints={"data_sensitivity": "restricted"})]
        verify_capability_subset(parent, child)  # should pass

    def test_missing_constraint(self):
        parent = [Capability(action="read.data",
                             constraints={"max_records": 10, "scope": "local"})]
        child = [Capability(action="read.data",
                            constraints={"max_records": 5})]
        with pytest.raises(ACTPrivilegeEscalationError, match="missing"):
            verify_capability_subset(parent, child)

    def test_domain_specific_identical(self):
        parent = [Capability(action="read.data",
                             constraints={"custom": "value_a"})]
        child = [Capability(action="read.data",
                            constraints={"custom": "value_a"})]
        verify_capability_subset(parent, child)

    def test_domain_specific_different(self):
        parent = [Capability(action="read.data",
                             constraints={"custom": "value_a"})]
        child = [Capability(action="read.data",
                            constraints={"custom": "value_b"})]
        with pytest.raises(ACTPrivilegeEscalationError, match="identical"):
            verify_capability_subset(parent, child)


class TestVerifyDelegationChain:
    def test_chain_sig_verification(self, parent_setup):
        mandate, compact, priv, pub = parent_setup
        delegated, _ = create_delegated_mandate(
            parent_mandate=mandate, parent_compact=compact,
            delegator_private_key=priv,
            sub="c", kid="k", iss="agent-a", aud="c",
            iat=1, exp=2, jti="j",
            cap=[Capability(action="read.data", constraints={"max_records": 5})],
            task=TaskClaim(purpose="t"),
        )

        # Verify the chain
        def resolve_key(delegator_id):
            return pub

        def resolve_compact(jti):
            if jti == "parent-jti-1":
                return compact
            return None

        verify_delegation_chain(delegated, resolve_key, resolve_compact)

    def test_no_delegation(self):
        mandate = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=1, exp=2,
            task=TaskClaim(purpose="t"),
            cap=[Capability(action="x.y")],
        )
        verify_delegation_chain(mandate, lambda x: None)  # no-op

    def test_depth_exceeds_max(self):
        mandate = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=1, exp=2,
            task=TaskClaim(purpose="t"),
            cap=[Capability(action="x.y")],
            delegation=Delegation(depth=5, max_depth=3, chain=[
                DelegationEntry(delegator="x", jti="j", sig="s")
                for _ in range(5)
            ]),
        )
        with pytest.raises(ACTDelegationError, match="exceeds"):
            verify_delegation_chain(mandate, lambda x: None)
84
workspace/packages/act/tests/test_ledger.py
Normal file
@@ -0,0 +1,84 @@
"""Tests for act.ledger module."""

import pytest

from act.errors import ACTLedgerImmutabilityError
from act.ledger import ACTLedger
from act.token import ACTRecord, Capability, TaskClaim


def make_record(jti, wid=None):
    return ACTRecord(
        alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
        iat=1772064000, exp=1772064900,
        jti=jti, wid=wid,
        task=TaskClaim(purpose="t"),
        cap=[Capability(action="do.thing")],
        exec_act="do.thing", pred=[], exec_ts=1772064100,
        status="completed",
    )


class TestACTLedger:
    def test_append_and_get(self):
        ledger = ACTLedger()
        r = make_record("jti-1")
        seq = ledger.append(r)
        assert seq == 0
        assert ledger.get("jti-1") is r

    def test_sequential_ordering(self):
        ledger = ACTLedger()
        for i in range(5):
            seq = ledger.append(make_record(f"jti-{i}"))
            assert seq == i

    def test_duplicate_rejected(self):
        ledger = ACTLedger()
        ledger.append(make_record("jti-1"))
        with pytest.raises(ACTLedgerImmutabilityError):
            ledger.append(make_record("jti-1"))

    def test_get_missing(self):
        ledger = ACTLedger()
        assert ledger.get("missing") is None

    def test_list_all(self):
        ledger = ACTLedger()
        ledger.append(make_record("a"))
        ledger.append(make_record("b"))
        records = ledger.list()
        assert len(records) == 2

    def test_list_by_wid(self):
        ledger = ACTLedger()
        ledger.append(make_record("a", wid="w1"))
        ledger.append(make_record("b", wid="w2"))
        ledger.append(make_record("c", wid="w1"))
        assert len(ledger.list("w1")) == 2
        assert len(ledger.list("w2")) == 1
        assert len(ledger.list("w3")) == 0

    def test_verify_integrity_empty(self):
        ledger = ACTLedger()
        assert ledger.verify_integrity() is True

    def test_verify_integrity_with_records(self):
        ledger = ACTLedger()
        for i in range(10):
            ledger.append(make_record(f"jti-{i}"))
        assert ledger.verify_integrity() is True

    def test_verify_integrity_tampered(self):
        ledger = ACTLedger()
        ledger.append(make_record("jti-1"))
        ledger.append(make_record("jti-2"))
        # Tamper with chain hash
        ledger._chain_hashes[0] = b"\x00" * 32
        assert ledger.verify_integrity() is False

    def test_len(self):
        ledger = ACTLedger()
        assert len(ledger) == 0
        ledger.append(make_record("a"))
        assert len(ledger) == 1
103
workspace/packages/act/tests/test_lifecycle.py
Normal file
@@ -0,0 +1,103 @@
"""Tests for act.lifecycle module."""

import time
import uuid

import pytest

from act.crypto import generate_ed25519_keypair, sign, verify
from act.errors import ACTCapabilityError, ACTPhaseError
from act.lifecycle import transition_to_record
from act.token import (
    ACTMandate,
    ACTRecord,
    Capability,
    Delegation,
    ErrorClaim,
    TaskClaim,
    decode_jws,
    encode_jws,
)


@pytest.fixture
def keys():
    iss_priv, iss_pub = generate_ed25519_keypair()
    sub_priv, sub_pub = generate_ed25519_keypair()
    return iss_priv, iss_pub, sub_priv, sub_pub


@pytest.fixture
def mandate(keys):
    iss_priv, _, _, _ = keys
    m = ACTMandate(
        alg="EdDSA", kid="iss-key",
        iss="agent-a", sub="agent-b", aud="agent-b",
        iat=1772064000, exp=1772064900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="test"),
        cap=[Capability(action="read.data"), Capability(action="write.result")],
        delegation=Delegation(depth=0, max_depth=2, chain=[]),
    )
    return m


class TestTransitionToRecord:
    def test_basic_transition(self, mandate, keys):
        _, _, sub_priv, sub_pub = keys
        record, compact = transition_to_record(
            mandate, sub_kid="sub-key", sub_private_key=sub_priv,
            exec_act="read.data", pred=[], status="completed",
        )
        assert isinstance(record, ACTRecord)
        assert record.exec_act == "read.data"
        assert record.kid == "sub-key"
        assert record.iss == mandate.iss  # preserved
        # Verify signature
        _, _, sig, si = decode_jws(compact)
        verify(sub_pub, sig, si)

    def test_with_hashes(self, mandate, keys):
        _, _, sub_priv, _ = keys
        record, _ = transition_to_record(
            mandate, sub_kid="k", sub_private_key=sub_priv,
            exec_act="write.result", pred=[], status="completed",
            inp_hash="abc", out_hash="def",
        )
        assert record.inp_hash == "abc"
        assert record.out_hash == "def"

    def test_with_error(self, mandate, keys):
        _, _, sub_priv, _ = keys
        record, _ = transition_to_record(
            mandate, sub_kid="k", sub_private_key=sub_priv,
            exec_act="read.data", pred=[], status="failed",
            err=ErrorClaim(code="timeout", detail="request timed out"),
        )
        assert record.status == "failed"
        assert record.err is not None
        assert record.err.code == "timeout"

    def test_rejects_bad_exec_act(self, mandate, keys):
        _, _, sub_priv, _ = keys
        with pytest.raises(ACTCapabilityError):
            transition_to_record(
                mandate, sub_kid="k", sub_private_key=sub_priv,
                exec_act="delete.everything", pred=[],
            )

    def test_preserves_phase1_claims(self, mandate, keys):
        _, _, sub_priv, _ = keys
        record, _ = transition_to_record(
            mandate, sub_kid="k", sub_private_key=sub_priv,
            exec_act="read.data", pred=[], status="completed",
        )
        assert record.iss == mandate.iss
        assert record.sub == mandate.sub
        assert record.aud == mandate.aud
        assert record.iat == mandate.iat
        assert record.exp == mandate.exp
        assert record.jti == mandate.jti
        assert record.task == mandate.task
        assert record.cap == mandate.cap
244
workspace/packages/act/tests/test_token.py
Normal file
@@ -0,0 +1,244 @@
"""Tests for act.token module."""

import json
import time
import uuid

import pytest

from act.token import (
    ACTMandate,
    ACTRecord,
    Capability,
    Delegation,
    DelegationEntry,
    ErrorClaim,
    Oversight,
    TaskClaim,
    _b64url_decode,
    _b64url_encode,
    decode_jws,
    encode_jws,
    parse_token,
    validate_action_name,
)
from act.errors import ACTPhaseError, ACTValidationError


@pytest.fixture
def base_time():
    return 1772064000


@pytest.fixture
def mandate(base_time):
    return ACTMandate(
        alg="EdDSA",
        kid="test-key",
        iss="agent-a",
        sub="agent-b",
        aud="agent-b",
        iat=base_time,
        exp=base_time + 900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="test_task"),
        cap=[Capability(action="read.data")],
    )


class TestBase64url:
    def test_roundtrip(self):
        data = b"hello world"
        assert _b64url_decode(_b64url_encode(data)) == data

    def test_no_padding(self):
        encoded = _b64url_encode(b"test")
        assert "=" not in encoded


class TestActionNameValidation:
    def test_valid_simple(self):
        validate_action_name("read")

    def test_valid_dotted(self):
        validate_action_name("read.data")

    def test_valid_with_hyphens(self):
        validate_action_name("read-write.data_item")

    def test_invalid_starts_with_digit(self):
        with pytest.raises(ACTValidationError):
            validate_action_name("1read")

    def test_invalid_empty(self):
        with pytest.raises(ACTValidationError):
            validate_action_name("")

    def test_invalid_double_dot(self):
        with pytest.raises(ACTValidationError):
            validate_action_name("read..data")


class TestTaskClaim:
    def test_roundtrip(self):
        t = TaskClaim(purpose="test", data_sensitivity="restricted")
        d = t.to_dict()
        t2 = TaskClaim.from_dict(d)
        assert t == t2

    def test_missing_purpose(self):
        with pytest.raises(ACTValidationError):
            TaskClaim.from_dict({})


class TestCapability:
    def test_roundtrip(self):
        c = Capability(action="read.data", constraints={"max": 10})
        d = c.to_dict()
        c2 = Capability.from_dict(d)
        assert c == c2

    def test_validates_action(self):
        with pytest.raises(ACTValidationError):
            Capability(action="")


class TestDelegation:
    def test_roundtrip(self):
        d = Delegation(
            depth=1,
            max_depth=3,
            chain=[DelegationEntry(delegator="a", jti="j1", sig="sig1")],
        )
        as_dict = d.to_dict()
        d2 = Delegation.from_dict(as_dict)
        assert d.depth == d2.depth
        assert len(d2.chain) == 1


class TestACTMandate:
    def test_validate_success(self, mandate):
        mandate.validate()

    def test_validate_missing_iss(self, base_time):
        m = ACTMandate(
            alg="EdDSA", kid="k", iss="", sub="b", aud="b",
            iat=base_time, exp=base_time + 900,
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
        )
        with pytest.raises(ACTValidationError, match="iss"):
            m.validate()

    def test_validate_forbidden_alg(self, base_time):
        m = ACTMandate(
            alg="HS256", kid="k", iss="a", sub="b", aud="b",
            iat=base_time, exp=base_time + 900,
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
        )
        with pytest.raises(ACTValidationError):
            m.validate()

    def test_validate_alg_none(self, base_time):
        m = ACTMandate(
            alg="none", kid="k", iss="a", sub="b", aud="b",
            iat=base_time, exp=base_time + 900,
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
        )
        with pytest.raises(ACTValidationError):
            m.validate()

    def test_to_claims_includes_optional(self, base_time):
        m = ACTMandate(
            alg="EdDSA", kid="k", iss="a", sub="b", aud="b",
            iat=base_time, exp=base_time + 900,
            task=TaskClaim(purpose="t"), cap=[Capability(action="x.y")],
            wid="w-1",
            oversight=Oversight(requires_approval_for=["x.y"]),
        )
        claims = m.to_claims()
        assert claims["wid"] == "w-1"
        assert "oversight" in claims

    def test_is_phase2(self, mandate):
        assert mandate.is_phase2() is False

    def test_from_claims_rejects_phase2(self):
        with pytest.raises(ACTPhaseError):
            ACTMandate.from_claims(
                {"alg": "EdDSA", "typ": "act+jwt", "kid": "k"},
                {"exec_act": "x", "iss": "a", "sub": "b", "aud": "b",
                 "iat": 1, "exp": 2, "jti": "j",
                 "task": {"purpose": "t"}, "cap": [{"action": "x"}]},
            )


class TestACTRecord:
    def test_from_mandate(self, mandate):
        r = ACTRecord.from_mandate(
            mandate, kid="sub-key", exec_act="read.data",
            pred=[], status="completed",
        )
        assert r.iss == mandate.iss
        assert r.exec_act == "read.data"
        assert r.kid == "sub-key"

    def test_validate_bad_status(self, mandate):
        r = ACTRecord.from_mandate(
            mandate, kid="k", exec_act="read.data",
            pred=[], exec_ts=mandate.iat + 100, status="invalid",
        )
        with pytest.raises(ACTValidationError, match="status"):
            r.validate()

    def test_is_phase2(self, mandate):
        r = ACTRecord.from_mandate(
            mandate, kid="k", exec_act="read.data",
            pred=[], status="completed",
        )
        assert r.is_phase2() is True

    def test_from_claims_rejects_phase1(self):
        with pytest.raises(ACTPhaseError):
            ACTRecord.from_claims(
                {"alg": "EdDSA", "typ": "act+jwt", "kid": "k"},
                {"iss": "a", "sub": "b", "aud": "b",
                 "iat": 1, "exp": 2, "jti": "j",
                 "task": {"purpose": "t"}, "cap": [{"action": "x"}]},
            )


class TestJWSSerialization:
    def test_decode_invalid_parts(self):
        with pytest.raises(ACTValidationError):
            decode_jws("only.two")

    def test_decode_invalid_header(self):
        with pytest.raises(ACTValidationError):
            decode_jws("!!!.cGF5bG9hZA.c2ln")

    def test_decode_wrong_typ(self):
        header = _b64url_encode(json.dumps({"alg": "EdDSA", "typ": "jwt", "kid": "k"}).encode())
        payload = _b64url_encode(json.dumps({"iss": "a"}).encode())
        sig = _b64url_encode(b"sig")
        with pytest.raises(ACTValidationError, match="typ"):
            decode_jws(f"{header}.{payload}.{sig}")

    def test_parse_token_phase1(self, mandate):
        from act.crypto import generate_ed25519_keypair, sign
        priv, pub = generate_ed25519_keypair()
        sig = sign(priv, mandate.signing_input())
        compact = encode_jws(mandate, sig)
        parsed = parse_token(compact)
        assert isinstance(parsed, ACTMandate)

    def test_parse_token_phase2(self, mandate):
        from act.crypto import generate_ed25519_keypair, sign
        priv, pub = generate_ed25519_keypair()
        record = ACTRecord.from_mandate(
            mandate, kid="k", exec_act="read.data",
            pred=[], status="completed",
        )
        sig = sign(priv, record.signing_input())
        compact = encode_jws(record, sig)
        parsed = parse_token(compact)
        assert isinstance(parsed, ACTRecord)
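`TestBase64url` above expects unpadded base64url encoding. A self-contained sketch of that encode/decode pair (hypothetical helpers shown for illustration, not the actual `act.token` internals):

```python
import base64

def b64url_encode(data: bytes) -> str:
    # Strip "=" padding, as the no-padding test expects.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(s: str) -> bytes:
    # Re-pad to a multiple of four characters before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
```

The round-trip and no-padding properties asserted by the tests hold for any byte string with this pair.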
35
workspace/packages/act/tests/test_vectors.py
Normal file
@@ -0,0 +1,35 @@
"""Tests for act.vectors module — Appendix B test vectors."""

import pytest

from act.vectors import generate_vectors, validate_vectors


class TestVectorGeneration:
    def test_generates_15_vectors(self):
        vectors, ctx = generate_vectors()
        assert len(vectors) == 15

    def test_vector_ids(self):
        vectors, _ = generate_vectors()
        ids = [v.id for v in vectors]
        expected = [f"B.{i}" for i in range(1, 16)]
        assert ids == expected

    def test_valid_vectors_have_compact(self):
        vectors, _ = generate_vectors()
        for v in vectors:
            if v.valid and v.id != "B.7":
                assert v.compact, f"{v.id} should have compact"

    def test_invalid_vectors_have_exception(self):
        vectors, _ = generate_vectors()
        for v in vectors:
            if not v.valid:
                assert v.expected_exception is not None, \
                    f"{v.id} should have expected_exception"


class TestVectorValidation:
    def test_all_vectors_pass(self):
        assert validate_vectors() is True
191
workspace/packages/act/tests/test_verify.py
Normal file
@@ -0,0 +1,191 @@
"""Tests for act.verify module."""

import time
import uuid

import pytest

from act.crypto import (
    ACTKeyResolver,
    KeyRegistry,
    generate_ed25519_keypair,
    sign,
)
from act.errors import (
    ACTAudienceMismatchError,
    ACTCapabilityError,
    ACTExpiredError,
    ACTPhaseError,
    ACTSignatureError,
    ACTValidationError,
)
from act.ledger import ACTLedger
from act.lifecycle import transition_to_record
from act.token import (
    ACTMandate,
    ACTRecord,
    Capability,
    Delegation,
    TaskClaim,
    encode_jws,
)
from act.verify import ACTVerifier


@pytest.fixture
def setup():
    iss_priv, iss_pub = generate_ed25519_keypair()
    sub_priv, sub_pub = generate_ed25519_keypair()
    registry = KeyRegistry()
    registry.register("iss-key", iss_pub)
    registry.register("sub-key", sub_pub)
    resolver = ACTKeyResolver(registry=registry)
    base_time = 1772064000
    return {
        "iss_priv": iss_priv, "iss_pub": iss_pub,
        "sub_priv": sub_priv, "sub_pub": sub_pub,
        "registry": registry, "resolver": resolver,
        "base_time": base_time,
    }


def make_mandate(setup, **overrides):
    bt = setup["base_time"]
    defaults = dict(
        alg="EdDSA", kid="iss-key",
        iss="agent-issuer", sub="agent-subject",
        aud="agent-subject",
        iat=bt, exp=bt + 900,
        jti=str(uuid.uuid4()),
        task=TaskClaim(purpose="test"),
        cap=[Capability(action="read.data")],
    )
    defaults.update(overrides)
    return ACTMandate(**defaults)


def sign_mandate(mandate, priv_key):
    sig = sign(priv_key, mandate.signing_input())
    return encode_jws(mandate, sig)


class TestVerifyMandate:
    def test_valid_mandate(self, setup):
        verifier = ACTVerifier(
            setup["resolver"],
            verifier_id="agent-subject",
            trusted_issuers={"agent-issuer"},
        )
        mandate = make_mandate(setup)
        compact = sign_mandate(mandate, setup["iss_priv"])
        result = verifier.verify_mandate(compact, now=setup["base_time"] + 100)
        assert result.iss == "agent-issuer"

    def test_expired(self, setup):
        verifier = ACTVerifier(setup["resolver"], verifier_id="agent-subject")
        mandate = make_mandate(setup)
        compact = sign_mandate(mandate, setup["iss_priv"])
        with pytest.raises(ACTExpiredError):
            verifier.verify_mandate(compact, now=setup["base_time"] + 2000)

    def test_wrong_audience(self, setup):
        verifier = ACTVerifier(
            setup["resolver"], verifier_id="other-agent",
            trusted_issuers={"agent-issuer"},
        )
        mandate = make_mandate(setup)
        compact = sign_mandate(mandate, setup["iss_priv"])
        with pytest.raises(ACTAudienceMismatchError):
            verifier.verify_mandate(
                compact, now=setup["base_time"] + 100, check_sub=False,
            )

    def test_untrusted_issuer(self, setup):
        verifier = ACTVerifier(
            setup["resolver"], verifier_id="agent-subject",
            trusted_issuers={"trusted-only"},
        )
        mandate = make_mandate(setup)
        compact = sign_mandate(mandate, setup["iss_priv"])
        with pytest.raises(ACTValidationError, match="not trusted"):
            verifier.verify_mandate(compact, now=setup["base_time"] + 100)

    def test_signature_failure(self, setup):
        verifier = ACTVerifier(setup["resolver"], verifier_id="agent-subject")
        mandate = make_mandate(setup)
        compact = sign_mandate(mandate, setup["iss_priv"])
        # Tamper with signature
        parts = compact.split(".")
        parts[2] = parts[2][:-4] + "XXXX"
        tampered = ".".join(parts)
        with pytest.raises(ACTSignatureError):
            verifier.verify_mandate(tampered, now=setup["base_time"] + 100)

    def test_phase2_as_mandate(self, setup):
        verifier = ACTVerifier(setup["resolver"])
        mandate = make_mandate(setup)
        record, compact = transition_to_record(
            mandate, sub_kid="sub-key", sub_private_key=setup["sub_priv"],
            exec_act="read.data", pred=[], status="completed",
            exec_ts=setup["base_time"] + 100,
        )
        with pytest.raises(ACTPhaseError):
            verifier.verify_mandate(compact, now=setup["base_time"] + 100)

    def test_future_iat(self, setup):
        verifier = ACTVerifier(setup["resolver"], verifier_id="agent-subject")
        bt = setup["base_time"]
        mandate = make_mandate(setup, iat=bt + 1000, exp=bt + 2000)
        compact = sign_mandate(mandate, setup["iss_priv"])
        with pytest.raises(ACTValidationError, match="future"):
            verifier.verify_mandate(compact, now=bt)


class TestVerifyRecord:
    def test_valid_record(self, setup):
        verifier = ACTVerifier(
            setup["resolver"],
            verifier_id="agent-subject",
            trusted_issuers={"agent-issuer"},
        )
        mandate = make_mandate(setup)
        record, compact = transition_to_record(
            mandate, sub_kid="sub-key", sub_private_key=setup["sub_priv"],
            exec_act="read.data", pred=[],
            exec_ts=setup["base_time"] + 100, status="completed",
        )
        result = verifier.verify_record(
            compact, now=setup["base_time"] + 200, check_aud=False,
        )
        assert result.exec_act == "read.data"

    def test_wrong_signer(self, setup):
        verifier = ACTVerifier(setup["resolver"])
        mandate = make_mandate(setup)
        record = ACTRecord.from_mandate(
            mandate, kid="sub-key", exec_act="read.data",
            pred=[], exec_ts=setup["base_time"] + 100, status="completed",
        )
        # Sign with iss key instead of sub key
        sig = sign(setup["iss_priv"], record.signing_input())
        compact = encode_jws(record, sig)
        with pytest.raises(ACTSignatureError):
            verifier.verify_record(compact, now=setup["base_time"] + 200)

    def test_with_dag_validation(self, setup):
        verifier = ACTVerifier(
            setup["resolver"], verifier_id="agent-subject",
            trusted_issuers={"agent-issuer"},
        )
        ledger = ACTLedger()
        mandate = make_mandate(setup)
        record, compact = transition_to_record(
            mandate, sub_kid="sub-key", sub_private_key=setup["sub_priv"],
            exec_act="read.data", pred=[],
            exec_ts=setup["base_time"] + 100, status="completed",
        )
        result = verifier.verify_record(
            compact, store=ledger,
            now=setup["base_time"] + 200, check_aud=False,
        )
        assert result.status == "completed"
BIN
workspace/packages/ect/.coverage
Normal file
Binary file not shown.
111
workspace/packages/ect/README.md
Normal file
@@ -0,0 +1,111 @@
# WIMSE ECT — Python Reference Implementation

Python reference implementation of [Execution Context Tokens (ECTs)](../../draft-nennemann-wimse-execution-context-01.txt) for WIMSE. Implements ECT creation (ES256), verification (Section 7), DAG validation (Section 6), and an in-memory audit ledger (Section 9).

## Layout

```
python/
├── pyproject.toml
├── ect/                  # library
│   ├── __init__.py
│   ├── types.py          # Payload, constants
│   ├── create.py         # create(), generate_key()
│   ├── verify.py         # parse(), verify(), VerifyOptions
│   ├── dag.py            # validate_dag(), ECTStore, DAGConfig
│   ├── ledger.py         # Ledger, MemoryLedger
│   ├── config.py         # Config, load_config_from_env()
│   ├── jti_cache.py      # JTICache for replay protection
│   └── validate.py       # validate_ext, valid_uuid, validate_hash_format
├── tests/
│   ├── test_create.py
│   └── test_dag.py
├── testdata/
│   └── valid_root_ect_payload.json
└── demo.py               # two-agent workflow demo
```

## Install

```bash
cd refimpl/python && pip install -e .
```

## Usage

```python
import time

from ect import (
    Payload,
    create,
    generate_key,
    CreateOptions,
    verify,
    VerifyOptions,
    MemoryLedger,
    load_config_from_env,
)

cfg = load_config_from_env()
key = generate_key()
payload = Payload(
    iss="spiffe://example.com/agent/a",
    aud=["spiffe://example.com/agent/b"],
    iat=int(time.time()),
    exp=int(time.time()) + 600,
    jti="550e8400-e29b-41d4-a716-446655440000",
    exec_act="review_spec",
    pred=[],
    ext={
        "pol": "policy_v1",
        "pol_decision": "approved",
    },
)
compact = create(payload, key, cfg.create_options("agent-a-key"))

store = MemoryLedger()
opts = cfg.verify_options()
opts.verifier_id = "spiffe://example.com/agent/b"
opts.resolve_key = lambda kid: key.public_key() if kid == "agent-a-key" else None
opts.store = store
parsed = verify(compact, opts)
store.append(compact, parsed.payload)
```

## Demo

```bash
cd refimpl/python && python3 demo.py
```

## Tests

```bash
cd refimpl/python && python3 -m pytest tests/ -v
```

Unit tests require **90% coverage** minimum (`pytest` is configured with `--cov-fail-under=90` in `pyproject.toml`). Install dev deps: `pip install -e ".[dev]"`. Uncovered lines are mainly abstract base methods and a few verify branches that need manually built tokens.

## draft-01 claim changes

| -00 (previous) | -01 (current) | Notes |
|----------------|---------------|-------|
| `par` | `pred` | Predecessor task IDs |
| `pol`, `pol_decision` | removed (use `ect_ext`) | Policy claims moved to extension object |
| `sub` | not defined | Standard JWT claim, not part of ECT spec |
| `typ: wimse-exec+jwt` | `typ: exec+jwt` (preferred) | Both accepted for backward compat |
| `max_par_length` | `max_pred_length` | Renamed to match `pred` claim |

## Production configuration (environment)

Same env vars as the Go refimpl: `ECT_IAT_MAX_AGE_MINUTES`, `ECT_IAT_MAX_FUTURE_SEC`, `ECT_DEFAULT_EXPIRY_MIN`, `ECT_JTI_REPLAY_CACHE_SIZE`, `ECT_JTI_REPLAY_TTL_MIN`.

### Replay cache (multi-instance)

The provided JTI cache is in-memory only. For multiple verifier instances, use a shared store (Redis, DB) and pass a `jti_seen` callable that checks/records JTIs there. See refimpl/README for an overview.
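The shared-store approach can be sketched with a dict standing in for the external store (Redis `SETNX` + TTL, or a database unique key, in production). The `jti_seen` shape below is an illustration of the callable described above, not an API defined by the refimpl:

```python
import time

class SharedJTIStore:
    """Stand-in for a shared replay store; swap the dict for Redis or a DB
    so that all verifier instances consult the same set of seen JTIs."""

    def __init__(self, ttl_sec: int = 3600):
        self._seen = {}        # jti -> expiry timestamp
        self._ttl = ttl_sec

    def jti_seen(self, jti: str) -> bool:
        """Return True if jti was already recorded; otherwise record it and return False."""
        now = time.time()
        # Drop expired entries so the store does not grow without bound.
        self._seen = {j: exp for j, exp in self._seen.items() if exp > now}
        if jti in self._seen:
            return True
        self._seen[jti] = now + self._ttl
        return False
```

A verifier would call the store's `jti_seen` for each incoming token and reject on `True`; the TTL only needs to cover the maximum accepted token lifetime.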
## Dependencies

- PyJWT, cryptography (ES256).

## License

Same as the Internet-Draft (IETF Trust). Code under Revised BSD per BCP 78/79.
102
workspace/packages/ect/demo.py
Normal file
@@ -0,0 +1,102 @@
#!/usr/bin/env python3
"""Two-agent ECT workflow demo: Agent A creates root ECT, Agent B verifies and creates child."""

import time

from ect import (
    Payload,
    create,
    generate_key,
    CreateOptions,
    verify,
    VerifyOptions,
    MemoryLedger,
)


def main():
    ledger = MemoryLedger()
    now = int(time.time())

    key_a = generate_key()
    agent_a = "spiffe://example.com/agent/spec-reviewer"
    agent_b = "spiffe://example.com/agent/implementer"
    kid_a = "agent-a-key"

    # 1) Agent A creates root ECT (task id = jti per spec)
    root_jti = "550e8400-e29b-41d4-a716-446655440001"
    payload_a = Payload(
        iss=agent_a,
        aud=[agent_b],
        iat=now,
        exp=now + 600,
        jti=root_jti,
        wid="wf-demo-001",
        exec_act="review_requirements_spec",
        pred=[],
        ext={
            "pol": "spec_review_policy_v2",
            "pol_decision": "approved",
        },
    )
    ect_a = create(payload_a, key_a, CreateOptions(key_id=kid_a))
    print("Agent A created root ECT (jti=550e8400-..., review_requirements_spec)")

    # 2) Agent B verifies
    def resolve_key(kid):
        if kid == kid_a:
            return key_a.public_key()
        return None

    opts = VerifyOptions(
        verifier_id=agent_b,
        resolve_key=resolve_key,
        store=ledger,
        now=now,
    )
    parsed = verify(ect_a, opts)
    ledger.append(ect_a, parsed.payload)
    print("Agent B verified root ECT and appended to ledger")

    # 3) Agent B creates child ECT (pred contains predecessor jti values per spec)
    key_b = generate_key()
    kid_b = "agent-b-key"
    child_jti = "550e8400-e29b-41d4-a716-446655440002"
    payload_b = Payload(
        iss=agent_b,
        aud=["spiffe://example.com/system/ledger"],
        iat=now + 1,
        exp=now + 600,
        jti=child_jti,
        wid="wf-demo-001",
        exec_act="implement_module",
        pred=[root_jti],
        ext={
            "pol": "coding_standards_v3",
            "pol_decision": "approved",
        },
    )
    ect_b = create(payload_b, key_b, CreateOptions(key_id=kid_b))
    print("Agent B created child ECT (jti=550e8400-...002, implement_module, pred=[predecessor jti])")

    # 4) Verify child ECT with DAG
    def resolver_b(kid):
        if kid == kid_b:
            return key_b.public_key()
        if kid == kid_a:
            return key_a.public_key()
        return None

    opts_b = VerifyOptions(
        verifier_id="spiffe://example.com/system/ledger",
        resolve_key=resolver_b,
        store=ledger,
        now=now + 2,
    )
    parsed_b = verify(ect_b, opts_b)
    ledger.append(ect_b, parsed_b.payload)
    print("Verified child ECT with DAG validation and appended to ledger")
    print(f"Ledger entries: {parsed.payload.jti} ({parsed.payload.exec_act}), {parsed_b.payload.jti} ({parsed_b.payload.exec_act})")


if __name__ == "__main__":
    main()
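The child-ECT step above hinges on `pred` referencing jti values already present in the ledger. A toy version of that predecessor-existence check (illustration only, not the refimpl's `validate_dag`, which also applies `DAGConfig` limits such as `max_pred_length`):

```python
def check_pred(known_jtis: set, pred: list) -> bool:
    # Every predecessor must already have been appended to the ledger;
    # a root ECT has an empty pred list and passes trivially.
    return all(p in known_jtis for p in pred)

ledger_jtis = {"550e8400-e29b-41d4-a716-446655440001"}
print(check_pred(ledger_jtis, []))                                        # root ECT: True
print(check_pred(ledger_jtis, ["550e8400-e29b-41d4-a716-446655440001"]))  # child: True
print(check_pred(ledger_jtis, ["missing-jti"]))                           # dangling edge: False
```

Rejecting a token whose `pred` contains an unknown jti is what keeps the recorded execution graph a DAG rooted in verified work.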
55
workspace/packages/ect/ect/__init__.py
Normal file
@@ -0,0 +1,55 @@
# WIMSE Execution Context Tokens (ECT) — Python reference implementation
# draft-nennemann-wimse-execution-context-01

from ect.types import (
    ECT_TYPE,
    ECT_TYPE_LEGACY,
    Payload,
)
from ect.create import create, generate_key, CreateOptions, default_create_options
from ect.verify import (
    ParsedECT,
    parse,
    verify,
    VerifyOptions,
    default_verify_options,
    KeyResolver,
)
from ect.dag import (
    ECTStore,
    DAGConfig,
    default_dag_config,
    validate_dag,
)
from ect.ledger import Ledger, MemoryLedger, LedgerEntry, ErrTaskIDExists
from ect.config import Config, default_config, load_config_from_env
from ect.jti_cache import JTICache, new_jti_cache

__all__ = [
    "ECT_TYPE",
    "ECT_TYPE_LEGACY",
    "Payload",
    "create",
    "generate_key",
    "CreateOptions",
    "default_create_options",
    "ParsedECT",
    "parse",
    "verify",
    "VerifyOptions",
    "default_verify_options",
    "KeyResolver",
    "ECTStore",
    "DAGConfig",
    "default_dag_config",
    "validate_dag",
    "Ledger",
    "MemoryLedger",
    "LedgerEntry",
    "ErrTaskIDExists",
    "Config",
    "default_config",
    "load_config_from_env",
    "JTICache",
    "new_jti_cache",
]
61
workspace/packages/ect/ect/config.py
Normal file
@@ -0,0 +1,61 @@
|
|||||||
|
"""Production config from environment."""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import os
|
||||||
|
from dataclasses import dataclass
|
||||||
|
|
||||||
|
ENV_IAT_MAX_AGE_MINUTES = "ECT_IAT_MAX_AGE_MINUTES"
|
||||||
|
ENV_IAT_MAX_FUTURE_SEC = "ECT_IAT_MAX_FUTURE_SEC"
|
||||||
|
ENV_DEFAULT_EXPIRY_MIN = "ECT_DEFAULT_EXPIRY_MIN"
|
||||||
|
ENV_JTI_REPLAY_CACHE_SIZE = "ECT_JTI_REPLAY_CACHE_SIZE"
|
||||||
|
ENV_JTI_REPLAY_TTL_MIN = "ECT_JTI_REPLAY_TTL_MIN"
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class Config:
|
||||||
|
iat_max_age_sec: int = 900
|
||||||
|
iat_max_future_sec: int = 30
|
||||||
|
default_expiry_sec: int = 600
|
||||||
|
jti_replay_size: int = 0
|
||||||
|
jti_replay_ttl_sec: int = 3600
|
||||||
|
|
||||||
|
def create_options(self, key_id: str) -> "CreateOptions":
|
||||||
|
from ect.create import CreateOptions
|
||||||
|
return CreateOptions(
|
||||||
|
key_id=key_id,
|
||||||
|
default_expiry_sec=self.default_expiry_sec,
|
||||||
|
)
|
||||||
|
|
||||||
|
def verify_options(self) -> "VerifyOptions":
|
||||||
|
from ect.verify import VerifyOptions
|
||||||
|
from ect.dag import default_dag_config
|
||||||
|
return VerifyOptions(
|
||||||
|
iat_max_age_sec=self.iat_max_age_sec,
|
||||||
|
iat_max_future_sec=self.iat_max_future_sec,
|
||||||
|
dag=default_dag_config(),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def default_config() -> Config:
|
||||||
|
return Config()
|
||||||
|
|
||||||
|
|
||||||
|
def _int_env(name: str, default: int) -> int:
|
||||||
|
v = os.environ.get(name)
|
||||||
|
if v is None or v == "":
|
||||||
|
return default
|
||||||
|
try:
|
||||||
|
return int(v)
|
||||||
|
except ValueError:
|
||||||
|
return default
|
||||||
|
|
||||||
|
|
||||||
|
def load_config_from_env() -> Config:
|
||||||
|
c = default_config()
|
||||||
|
c.iat_max_age_sec = _int_env(ENV_IAT_MAX_AGE_MINUTES, 15) * 60
|
||||||
|
c.iat_max_future_sec = _int_env(ENV_IAT_MAX_FUTURE_SEC, 30)
|
||||||
|
c.default_expiry_sec = _int_env(ENV_DEFAULT_EXPIRY_MIN, 10) * 60
|
||||||
|
c.jti_replay_size = _int_env(ENV_JTI_REPLAY_CACHE_SIZE, 0)
|
||||||
|
c.jti_replay_ttl_sec = _int_env(ENV_JTI_REPLAY_TTL_MIN, 60) * 60
|
||||||
|
return c
|
||||||
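The env-loading pattern in config.py (string to int with a safe fallback, then minutes to seconds) can be sketched standalone; the `DEMO_` variable name and `int_env` helper below are illustrative, not part of the package:

```python
import os


def int_env(name: str, default: int) -> int:
    """Read an integer env var, falling back to default on absence or bad input."""
    v = os.environ.get(name)
    if not v:
        return default
    try:
        return int(v)
    except ValueError:
        return default


os.environ["DEMO_IAT_MAX_AGE_MINUTES"] = "20"
age_sec = int_env("DEMO_IAT_MAX_AGE_MINUTES", 15) * 60  # minutes -> seconds
os.environ["DEMO_IAT_MAX_AGE_MINUTES"] = "bad"
fallback_sec = int_env("DEMO_IAT_MAX_AGE_MINUTES", 15) * 60  # bad value -> default
```

Swallowing `ValueError` rather than raising matches the module's choice to never fail startup on a malformed variable.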
104
workspace/packages/ect/ect/create.py
Normal file
@@ -0,0 +1,104 @@
"""ECT creation: build and sign JWT with ES256."""

from __future__ import annotations

import copy
import time
from dataclasses import dataclass
from typing import Optional

import jwt
from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePrivateKey

from ect.types import ECT_TYPE, Payload
from ect.validate import (
    DEFAULT_MAX_PRED_LENGTH,
    validate_ext,
    validate_hash_format,
    valid_uuid,
)


@dataclass
class CreateOptions:
    key_id: str
    iat_max_age_sec: int = 900  # 15 min
    default_expiry_sec: int = 600  # 10 min
    validate_uuids: bool = False
    max_pred_length: int = 0  # 0 = no limit; use DEFAULT_MAX_PRED_LENGTH for 100


def default_create_options() -> CreateOptions:
    return CreateOptions(key_id="")


def _validate_payload(p: Payload, opts: CreateOptions) -> None:
    if not p.iss:
        raise ValueError("ect: iss required")
    if not p.aud:
        raise ValueError("ect: aud required")
    if not p.jti:
        raise ValueError("ect: jti required")
    if not p.exec_act:
        raise ValueError("ect: exec_act required")
    if opts.validate_uuids:
        if not valid_uuid(p.jti):
            raise ValueError("ect: jti must be UUID format")
        if p.wid and not valid_uuid(p.wid):
            raise ValueError("ect: wid must be UUID format when set")
    max_pred = opts.max_pred_length or 0
    if max_pred > 0 and len(p.pred) > max_pred:
        raise ValueError("ect: pred exceeds max length")
    if p.inp_hash:
        validate_hash_format(p.inp_hash)
    if p.out_hash:
        validate_hash_format(p.out_hash)
    validate_ext(p.ext)
    # compensation in ext per spec
    if p.ext and p.ext.get("compensation_reason") and not p.ext.get("compensation_required"):
        raise ValueError("ect: ext.compensation_reason requires ext.compensation_required true")


def create(
    payload: Payload,
    private_key: EllipticCurvePrivateKey,
    opts: CreateOptions,
) -> str:
    """Build and sign an ECT. Payload must have required claims; iat/exp can be 0 for defaults.

    create() works on a deep copy so the caller's payload is not modified.
    """
    if not opts.key_id:
        raise ValueError("ect: KeyID required")

    # Work on a copy so we do not mutate the caller's payload.
    payload = copy.deepcopy(payload)

    now = int(time.time())
    if payload.iat == 0:
        payload.iat = now
    if payload.exp == 0:
        payload.exp = now + (opts.default_expiry_sec or 600)
    if payload.pred is None:
        payload.pred = []

    _validate_payload(payload, opts)

    claims = payload.to_claims()
    headers = {
        "typ": ECT_TYPE,
        "alg": "ES256",
        "kid": opts.key_id,
    }
    return jwt.encode(
        claims,
        private_key,
        algorithm="ES256",
        headers=headers,
    )


def generate_key() -> EllipticCurvePrivateKey:
    """Create an ECDSA P-256 key for ES256 (testing/demo)."""
    from cryptography.hazmat.primitives.asymmetric import ec

    return ec.generate_private_key(ec.SECP256R1())
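The copy-then-default step in `create()` above can be shown in isolation; this standalone sketch (the `fill_defaults` name is illustrative, and it works on a plain claims dict rather than the package's `Payload`) demonstrates why the deep copy matters:

```python
import copy
import time


def fill_defaults(claims: dict, default_expiry_sec: int = 600) -> dict:
    """Return a copy of claims with iat/exp defaulted, leaving the input untouched."""
    out = copy.deepcopy(claims)
    now = int(time.time())
    if out.get("iat", 0) == 0:
        out["iat"] = now
    if out.get("exp", 0) == 0:
        out["exp"] = out["iat"] + default_expiry_sec
    return out


original = {"jti": "t-1", "iat": 0, "exp": 0}
filled = fill_defaults(original)
# The caller's dict is unchanged; only the copy carries the defaults.
```

Without the copy, a caller reusing the same payload for a retry would silently carry stale `iat`/`exp` values from the first attempt.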
96
workspace/packages/ect/ect/dag.py
Normal file
@@ -0,0 +1,96 @@
"""DAG validation per Section 6."""

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ect.types import Payload

from ect.validate import DEFAULT_MAX_PRED_LENGTH

DEFAULT_CLOCK_SKEW_TOLERANCE = 30
DEFAULT_MAX_ANCESTOR_LIMIT = 10000


class ECTStore(ABC):
    """Lookup of ECTs by task ID for DAG validation."""

    @abstractmethod
    def get_by_tid(self, tid: str) -> "Payload | None":
        pass

    @abstractmethod
    def contains(self, tid: str, wid: str) -> bool:
        pass


class DAGConfig:
    def __init__(
        self,
        clock_skew_tolerance: int = DEFAULT_CLOCK_SKEW_TOLERANCE,
        max_ancestor_limit: int = DEFAULT_MAX_ANCESTOR_LIMIT,
        max_pred_length: int = 0,
    ):
        self.clock_skew_tolerance = clock_skew_tolerance or DEFAULT_CLOCK_SKEW_TOLERANCE
        self.max_ancestor_limit = max_ancestor_limit or DEFAULT_MAX_ANCESTOR_LIMIT
        self.max_pred_length = max_pred_length or 0


def default_dag_config() -> DAGConfig:
    return DAGConfig()


def _has_cycle(
    target_tid: str,
    pred_ids: list[str],
    store: ECTStore,
    visited: set[str],
    max_depth: int,
) -> bool:
    if len(visited) >= max_depth:
        return True
    for pred_id in pred_ids:
        if pred_id == target_tid:
            return True
        if pred_id in visited:
            continue
        visited.add(pred_id)
        pred = store.get_by_tid(pred_id)
        if pred is not None:
            if _has_cycle(target_tid, pred.pred, store, visited, max_depth):
                return True
    return False


def validate_dag(
    payload: "Payload",
    store: ECTStore,
    cfg: DAGConfig,
) -> None:
    """Section 6.2: uniqueness (by jti), predecessor existence, temporal ordering, acyclicity, predecessor policy."""
    if cfg.max_pred_length > 0 and len(payload.pred) > cfg.max_pred_length:
        raise ValueError("ect: pred exceeds max length")
    if store.contains(payload.jti, payload.wid or ""):
        raise ValueError(f"ect: task ID (jti) already exists: {payload.jti}")

    for pred_id in payload.pred:
        pred = store.get_by_tid(pred_id)
        if pred is None:
            raise ValueError(f"ect: predecessor task not found: {pred_id}")
        if pred.iat >= payload.iat + cfg.clock_skew_tolerance:
            raise ValueError(f"ect: predecessor task not earlier than current: {pred_id}")

    visited: set[str] = set()
    if _has_cycle(payload.jti, payload.pred, store, visited, cfg.max_ancestor_limit):
        raise ValueError("ect: circular dependency or depth limit exceeded")

    # Predecessor policy decision: only when predecessor has policy claims in ext per -01
    for pred_id in payload.pred:
        pred = store.get_by_tid(pred_id)
        if pred and pred.has_policy_claims() and pred.pol_decision() in ("rejected", "pending_human_review"):
            if not payload.compensation_required():
                raise ValueError(
                    "ect: predecessor has non-approved pol_decision; current ECT must be compensation/remediation or have ext.compensation_required true"
                )
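The acyclicity walk in `_has_cycle` is easiest to see over a plain dict of `tid -> predecessor ids` instead of an `ECTStore`; this standalone sketch (names illustrative) mirrors the same depth-first traversal with a visited set and a depth cap:

```python
def has_cycle(target: str, preds: list, store: dict, visited: set, max_depth: int = 100) -> bool:
    """Depth-first walk of predecessor edges; True on a back-edge to target or depth blowout."""
    if len(visited) >= max_depth:
        return True
    for p in preds:
        if p == target:
            return True
        if p in visited:
            continue
        visited.add(p)
        if p in store and has_cycle(target, store[p], store, visited, max_depth):
            return True
    return False


# Acyclic chain c <- b <- a: validating a new task "d" with pred ["c"] is fine.
store = {"a": [], "b": ["a"], "c": ["b"]}
ok = has_cycle("d", ["c"], store, set())

# Introduce a back-edge a -> d; now "d" is its own ancestor.
store["a"] = ["d"]
bad = has_cycle("d", ["c"], store, set())
```

Treating "depth limit exceeded" as a cycle is conservative: it bounds traversal cost on adversarially deep `pred` chains rather than trusting them.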
52
workspace/packages/ect/ect/jti_cache.py
Normal file
@@ -0,0 +1,52 @@
"""JTI replay cache for production verification."""

from __future__ import annotations

import threading
import time
from abc import ABC, abstractmethod


class JTICache(ABC):
    @abstractmethod
    def seen(self, jti: str) -> bool:
        pass

    @abstractmethod
    def add(self, jti: str) -> None:
        pass


class _MemoryJTICache(JTICache):
    def __init__(self, max_size: int, ttl_sec: int) -> None:
        self._max_size = max_size
        self._ttl_sec = ttl_sec
        self._by_jti: dict[str, float] = {}
        self._lock = threading.RLock()

    def seen(self, jti: str) -> bool:
        with self._lock:
            exp = self._by_jti.get(jti)
            if exp is None:
                return False
            if time.time() > exp:
                del self._by_jti[jti]
                return False
            return True

    def add(self, jti: str) -> None:
        with self._lock:
            now = time.time()
            for k, exp in list(self._by_jti.items()):
                if now > exp:
                    del self._by_jti[k]
            if self._max_size > 0 and len(self._by_jti) >= self._max_size and jti not in self._by_jti:
                # evict one
                for k in self._by_jti:
                    del self._by_jti[k]
                    break
            self._by_jti[jti] = now + self._ttl_sec


def new_jti_cache(max_size: int, ttl_sec: int) -> JTICache:
    return _MemoryJTICache(max_size, ttl_sec)
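The replay-detection contract of the cache above boils down to: unknown jti is not seen, an added jti is seen until its TTL lapses. A minimal lock-free sketch of that contract (the `TTLSet` name is illustrative, and it omits the size cap and eviction):

```python
import time


class TTLSet:
    """Minimal jti replay set: remembers IDs for ttl_sec seconds."""

    def __init__(self, ttl_sec: float) -> None:
        self._ttl = ttl_sec
        self._exp: dict = {}  # jti -> absolute expiry time

    def seen(self, jti: str) -> bool:
        exp = self._exp.get(jti)
        if exp is None:
            return False
        if time.time() > exp:
            del self._exp[jti]  # lazy expiry on lookup
            return False
        return True

    def add(self, jti: str) -> None:
        self._exp[jti] = time.time() + self._ttl


cache = TTLSet(ttl_sec=60)
first = cache.seen("jti-1")   # unknown -> not a replay
cache.add("jti-1")
second = cache.seen("jti-1")  # now flagged as a replay
```

Storing absolute expiry times (rather than insertion times) keeps `seen()` a single comparison per lookup.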
97
workspace/packages/ect/ect/ledger.py
Normal file
@@ -0,0 +1,97 @@
"""Audit ledger per Section 9."""

from __future__ import annotations

import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import TYPE_CHECKING

from ect.types import Payload

if TYPE_CHECKING:
    pass


class ErrTaskIDExists(Exception):
    """Raised when appending an ECT whose tid already exists."""


@dataclass
class LedgerEntry:
    ledger_sequence: int
    task_id: str
    agent_id: str
    action: str
    predecessors: list[str]
    ect_jws: str
    signature_verified: bool
    verification_timestamp: float
    stored_timestamp: float


class Ledger(ABC):
    """Append-only audit ledger; lookup by task id (jti)."""

    @abstractmethod
    def append(self, ect_jws: str, payload: Payload) -> int:
        """Returns new ledger sequence number."""
        pass

    @abstractmethod
    def get_by_tid(self, tid: str) -> Payload | None:
        pass

    @abstractmethod
    def contains(self, tid: str, wid: str) -> bool:
        pass


class MemoryLedger(Ledger):
    """In-memory append-only ECT store implementing Ledger and ECTStore."""

    def __init__(self) -> None:
        self._seq = 0
        self._by_tid: dict[str, "Payload"] = {}
        self._entries: list[LedgerEntry] = []
        self._lock = __import__("threading").Lock()

    def append(self, ect_jws: str, payload: Payload) -> int:
        if payload is None:
            return 0
        with self._lock:
            wid = payload.wid or ""
            if self._contains_locked(payload.jti, wid):
                raise ErrTaskIDExists("ect: task ID (jti) already exists in ledger")
            self._seq += 1
            now = time.time()
            entry = LedgerEntry(
                ledger_sequence=self._seq,
                task_id=payload.jti,
                agent_id=payload.iss,
                action=payload.exec_act,
                predecessors=list(payload.pred) if payload.pred else [],
                ect_jws=ect_jws,
                signature_verified=True,
                verification_timestamp=now,
                stored_timestamp=now,
            )
            self._by_tid[payload.jti] = payload
            self._entries.append(entry)
            return self._seq

    def get_by_tid(self, tid: str) -> Payload | None:
        with self._lock:
            return self._by_tid.get(tid)

    def contains(self, tid: str, wid: str) -> bool:
        with self._lock:
            return self._contains_locked(tid, wid)

    def _contains_locked(self, tid: str, wid: str) -> bool:
        p = self._by_tid.get(tid)
        if p is None:
            return False
        if not wid:
            return True
        return (p.wid or "") == wid
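The append-only invariant of `MemoryLedger` (monotonic sequence numbers, duplicate task IDs rejected) can be demonstrated without the package; the `MiniLedger` class below is an illustrative reduction, not the package's API:

```python
class MiniLedger:
    """Append-only map keyed by task id; duplicate appends are rejected."""

    def __init__(self) -> None:
        self._seq = 0
        self._by_tid: dict = {}

    def append(self, tid: str, record: dict) -> int:
        if tid in self._by_tid:
            raise ValueError("task ID already exists in ledger")
        self._seq += 1
        self._by_tid[tid] = record
        return self._seq


ledger = MiniLedger()
seq1 = ledger.append("t-1", {"action": "review_spec"})
seq2 = ledger.append("t-2", {"action": "approve_spec"})
try:
    ledger.append("t-1", {"action": "review_spec"})  # same tid again
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
```

Rejecting duplicates at append time is what makes the ledger usable as the uniqueness oracle for DAG validation: `validate_dag` can treat `contains(...)` as authoritative.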
106
workspace/packages/ect/ect/types.py
Normal file
@@ -0,0 +1,106 @@
"""ECT payload and claim types per draft-nennemann-wimse-ect-01 Section 4."""

from __future__ import annotations

import json
from dataclasses import dataclass, field
from typing import Any

# Preferred typ per -01; legacy accepted for backward compatibility.
ECT_TYPE = "exec+jwt"
ECT_TYPE_LEGACY = "wimse-exec+jwt"


def _audience_serialize(aud: list[str]) -> str | list[str]:
    if len(aud) == 1:
        return aud[0]
    return aud


def _audience_deserialize(raw: Any) -> list[str]:
    if isinstance(raw, list):
        return [str(x) for x in raw]
    if isinstance(raw, str):
        return [raw]
    raise ValueError("aud must be string or array of strings")


@dataclass
class Payload:
    """ECT JWT claims per Section 4. Task identity is jti only; no separate tid per spec."""

    iss: str
    aud: list[str]
    iat: int
    exp: int
    jti: str
    exec_act: str
    pred: list[str]  # predecessor jti values (renamed from par in -01)
    wid: str = ""
    inp_hash: str = ""
    out_hash: str = ""
    inp_classification: str = ""
    ext: dict[str, Any] = field(default_factory=dict)

    def to_claims(self) -> dict[str, Any]:
        """Export as JWT claims. Policy and compensation in ext per -01 spec."""
        out: dict[str, Any] = {
            "iss": self.iss,
            "aud": _audience_serialize(self.aud),
            "iat": self.iat,
            "exp": self.exp,
            "jti": self.jti,
            "exec_act": self.exec_act,
            "pred": self.pred,
        }
        if self.wid:
            out["wid"] = self.wid
        if self.inp_hash:
            out["inp_hash"] = self.inp_hash
        if self.out_hash:
            out["out_hash"] = self.out_hash
        if self.inp_classification:
            out["inp_classification"] = self.inp_classification
        if self.ext:
            out["ect_ext"] = dict(self.ext)
        return out

    @classmethod
    def from_claims(cls, claims: dict[str, Any]) -> Payload:
        """Build Payload from JWT claims. Policy claims read from ext per -01 spec."""
        ext = claims.get("ect_ext") or {}
        return cls(
            iss=claims["iss"],
            aud=_audience_deserialize(claims["aud"]),
            iat=int(claims["iat"]),
            exp=int(claims["exp"]),
            jti=claims["jti"],
            exec_act=claims["exec_act"],
            pred=claims.get("pred") or [],
            wid=claims.get("wid", ""),
            inp_hash=claims.get("inp_hash", ""),
            out_hash=claims.get("out_hash", ""),
            inp_classification=claims.get("inp_classification", ""),
            ext=ext,
        )

    def contains_audience(self, verifier_id: str) -> bool:
        return verifier_id in self.aud

    def compensation_required(self) -> bool:
        """Per spec: compensation_required is in ext."""
        if not self.ext:
            return False
        return bool(self.ext.get("compensation_required"))

    def has_policy_claims(self) -> bool:
        """True if both pol and pol_decision are present in ext (per -01, moved to extension)."""
        if not self.ext:
            return False
        return bool(self.ext.get("pol")) and bool(self.ext.get("pol_decision"))

    def pol_decision(self) -> str:
        """Return pol_decision from ext, or empty string."""
        if not self.ext:
            return ""
        return str(self.ext.get("pol_decision", ""))
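The `aud` handling above follows the usual JWT convention: a single audience serializes as a bare string, multiple audiences as an array, and deserialization accepts either. A standalone round-trip sketch (helper names illustrative):

```python
from typing import Any, List, Union


def aud_serialize(aud: List[str]) -> Union[str, List[str]]:
    """Single audience as a bare string, multiple as an array (JWT convention)."""
    return aud[0] if len(aud) == 1 else aud


def aud_deserialize(raw: Any) -> List[str]:
    """Accept either wire form; normalize to a list of strings internally."""
    if isinstance(raw, list):
        return [str(x) for x in raw]
    if isinstance(raw, str):
        return [raw]
    raise ValueError("aud must be string or array of strings")


single = aud_serialize(["spiffe://example.com/agent/b"])
multi = aud_serialize(["spiffe://a", "spiffe://b"])
round_trip = aud_deserialize(single)
```

Normalizing to a list internally means `contains_audience` is always a simple membership test, regardless of how the token was serialized.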
62
workspace/packages/ect/ect/validate.py
Normal file
@@ -0,0 +1,62 @@
"""Validation helpers: ext size/depth, UUID, inp_hash/out_hash format."""

from __future__ import annotations

import base64
import json
import re
from typing import Any

EXT_MAX_SIZE = 4096
EXT_MAX_DEPTH = 5
DEFAULT_MAX_PRED_LENGTH = 100

_UUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)


def _json_depth(obj: Any, depth: int = 0) -> int:
    if depth > EXT_MAX_DEPTH:
        return depth
    if isinstance(obj, dict):
        return max((_json_depth(v, depth + 1) for v in obj.values()), default=depth + 1)
    if isinstance(obj, list):
        return max((_json_depth(x, depth + 1) for x in obj), default=depth + 1)
    return depth


def validate_ext(ext: dict[str, Any] | None) -> None:
    """Raise ValueError if ext exceeds EXT_MAX_SIZE or nesting depth EXT_MAX_DEPTH."""
    if not ext:
        return
    raw = json.dumps(ext)
    if len(raw.encode("utf-8")) > EXT_MAX_SIZE:
        raise ValueError("ect: ext exceeds max size (4096 bytes)")
    if _json_depth(ext) > EXT_MAX_DEPTH:
        raise ValueError("ect: ext exceeds max nesting depth (5)")


def valid_uuid(s: str) -> bool:
    """Return True if s is a UUID string (RFC 9562)."""
    return bool(_UUID_RE.match(s))


def validate_hash_format(s: str) -> None:
    """Raise ValueError if s is non-empty and not plain base64url per RFC 9449 / ECT spec.

    The ECT spec (draft-nennemann-wimse-ect-01) and RFC 9449 specify
    ``base64url(SHA-256(data))`` — a plain base64url string without any
    algorithm prefix. This matches how ACT handles hashes.
    """
    if not s:
        return
    # Reject strings containing non-base64url characters.
    # base64url alphabet: A-Z a-z 0-9 - _ (no padding '=' expected)
    if not re.fullmatch(r"[A-Za-z0-9_-]+", s):
        raise ValueError("ect: inp_hash/out_hash must be plain base64url (no prefix)")
    # Verify it actually decodes.
    pad = 4 - len(s) % 4
    padded = s + "=" * pad if pad != 4 else s
    try:
        base64.urlsafe_b64decode(padded)
    except Exception:
        raise ValueError("ect: inp_hash/out_hash must be plain base64url (no prefix)") from None
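The two-step hash check in `validate_hash_format` (alphabet filter, then a decode after re-padding) can be exercised standalone; this boolean variant (`is_plain_base64url` is an illustrative name) shows why a prefixed digest like `sha256:...` is rejected:

```python
import base64
import re


def is_plain_base64url(s: str) -> bool:
    """True if s uses only the base64url alphabet and decodes after re-padding."""
    if not re.fullmatch(r"[A-Za-z0-9_-]+", s or ""):
        return False
    pad = -len(s) % 4  # restore stripped '=' padding
    try:
        base64.urlsafe_b64decode(s + "=" * pad)
        return True
    except Exception:
        return False


# A 32-byte digest, base64url-encoded with padding stripped (43 chars).
digest = base64.urlsafe_b64encode(b"\x01" * 32).rstrip(b"=").decode()
good = is_plain_base64url(digest)
bad = is_plain_base64url("sha256:" + digest)  # ':' is outside the alphabet
```

The alphabet check runs first so clearly malformed input never reaches the decoder, which keeps the error path cheap.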
154
workspace/packages/ect/ect/verify.py
Normal file
@@ -0,0 +1,154 @@
"""ECT verification per Section 7."""

from __future__ import annotations

import hmac
import time
from dataclasses import dataclass
from typing import Callable, Optional

import jwt
from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePublicKey

from ect.types import ECT_TYPE, ECT_TYPE_LEGACY, Payload
from ect.dag import ECTStore, DAGConfig, validate_dag
from ect.validate import validate_ext, validate_hash_format, valid_uuid


@dataclass
class ParsedECT:
    header: dict
    payload: Payload
    raw: str


KeyResolver = Callable[[str], Optional[EllipticCurvePublicKey]]


@dataclass
class VerifyOptions:
    verifier_id: str = ""
    resolve_key: Optional[KeyResolver] = None
    store: Optional[ECTStore] = None
    dag: Optional[DAGConfig] = None
    now: Optional[int] = None  # unix seconds; None = time.time()
    iat_max_age_sec: int = 900
    iat_max_future_sec: int = 30
    jti_seen: Optional[Callable[[str], bool]] = None
    wit_subject: str = ""
    validate_uuids: bool = False
    max_pred_length: int = 0  # 0 = no limit
    on_verify_attempt: Optional[Callable[[str, Optional[Exception]], None]] = None  # (jti, err) for observability


def default_verify_options() -> VerifyOptions:
    from ect.dag import default_dag_config
    return VerifyOptions(dag=default_dag_config())


def parse(compact: str) -> ParsedECT:
    """Parse compact JWS and return header + payload without verification."""
    try:
        unverified = jwt.decode(
            compact,
            options={"verify_signature": False, "verify_exp": False},
        )
    except Exception as e:
        raise ValueError(f"ect: parse failed: {e}") from e
    header = jwt.get_unverified_header(compact)
    if header.get("alg") != "ES256":
        raise ValueError("ect: expected ES256")
    payload = Payload.from_claims(unverified)
    return ParsedECT(header=header, payload=payload, raw=compact)


def verify(compact: str, opts: VerifyOptions) -> ParsedECT:
    """Full Section 7 verification and optional DAG validation."""
    log_jti: list[str] = [""]  # use list so callback sees updated jti

    def set_log_jti(jti: str) -> None:
        log_jti[0] = jti

    err: Optional[Exception] = None
    try:
        return _verify_impl(compact, opts, set_log_jti)
    except Exception as e:
        err = e
        raise
    finally:
        if opts.on_verify_attempt is not None:
            opts.on_verify_attempt(log_jti[0], err)


def _verify_impl(compact: str, opts: VerifyOptions, set_log_jti: Callable[[str], None]) -> ParsedECT:
    header = jwt.get_unverified_header(compact)
    typ = header.get("typ") or ""
    # Constant-time comparison for typ; accept both preferred and legacy values
    if not hmac.compare_digest(typ, ECT_TYPE) and not hmac.compare_digest(typ, ECT_TYPE_LEGACY):
        raise ValueError("ect: invalid typ parameter")
    alg = header.get("alg")
    if alg in ("none", "HS256", "HS384", "HS512"):
        raise ValueError("ect: prohibited algorithm")
    kid = header.get("kid")
    if not kid:
        raise ValueError("ect: missing kid")
    if not opts.resolve_key:
        raise ValueError("ect: ResolveKey required")
    pub = opts.resolve_key(kid)
    if pub is None:
        raise ValueError("ect: unknown key identifier")

    try:
        claims = jwt.decode(
            compact,
            pub,
            algorithms=["ES256"],
            options={"verify_exp": False, "verify_aud": False, "verify_iat": False},
        )
    except jwt.InvalidSignatureError as e:
        raise ValueError(f"ect: invalid signature: {e}") from e
    except Exception as e:
        raise ValueError(f"ect: verify failed: {e}") from e

    payload = Payload.from_claims(claims)
    set_log_jti(payload.jti)

    validate_ext(payload.ext)
    if opts.max_pred_length > 0 and len(payload.pred) > opts.max_pred_length:
        raise ValueError("ect: pred exceeds max length")
    if opts.validate_uuids:
        if not valid_uuid(payload.jti):
            raise ValueError("ect: jti must be UUID format")
        if payload.wid and not valid_uuid(payload.wid):
            raise ValueError("ect: wid must be UUID format when set")
    if payload.inp_hash:
        validate_hash_format(payload.inp_hash)
    if payload.out_hash:
        validate_hash_format(payload.out_hash)

    if opts.wit_subject and payload.iss != opts.wit_subject:
        raise ValueError("ect: issuer does not match WIT subject")
    if opts.verifier_id and not payload.contains_audience(opts.verifier_id):
        raise ValueError("ect: audience does not include verifier")

    now = opts.now if opts.now is not None else int(time.time())
    if now > payload.exp:
        raise ValueError("ect: token expired")
    if now - payload.iat > opts.iat_max_age_sec:
        raise ValueError("ect: iat too far in the past")
    if payload.iat > now + opts.iat_max_future_sec:
        raise ValueError("ect: iat in the future")

    # Required claims per spec: jti, exec_act, pred. pred may be set to [] when missing (from_claims already uses []).
    if not payload.jti or not payload.exec_act:
        raise ValueError("ect: missing required claims (jti, exec_act, pred)")
    if payload.pred is None:
        payload.pred = []

    if opts.store is not None and opts.dag is not None:
        validate_dag(payload, opts.store, opts.dag)

    if opts.jti_seen is not None and opts.jti_seen(payload.jti):
        raise ValueError("ect: jti already seen (replay)")

    return ParsedECT(header=header, payload=payload, raw=compact)
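The freshness checks near the end of `_verify_impl` (expiry, maximum age, bounded future skew) form a small pure function worth seeing in isolation; the sketch below (illustrative `check_time_window` name; returns a reason string instead of raising) reproduces the same three comparisons:

```python
def check_time_window(iat: int, exp: int, now: int,
                      max_age_sec: int = 900, max_future_sec: int = 30) -> str:
    """Return "" if the token is inside the freshness window, else a reason."""
    if now > exp:
        return "token expired"
    if now - iat > max_age_sec:
        return "iat too far in the past"
    if iat > now + max_future_sec:
        return "iat in the future"
    return ""


now = 1_772_064_200
ok = check_time_window(iat=now - 60, exp=now + 540, now=now)        # fresh token
expired = check_time_window(iat=now - 1200, exp=now - 600, now=now)  # past exp
skewed = check_time_window(iat=now + 120, exp=now + 720, now=now)    # clock ahead
```

Bounding `iat` on both sides, not just checking `exp`, is what stops a long-lived but replayed token and a token minted by a fast-running clock alike.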
25
workspace/packages/ect/pyproject.toml
Normal file
@@ -0,0 +1,25 @@
[build-system]
requires = ["setuptools>=61", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "ietf-ect"
version = "0.1.0"
description = "WIMSE Execution Context Tokens (ECT) reference implementation"
requires-python = ">=3.9"
license = "BSD-3-Clause"
dependencies = [
    "PyJWT>=2.8.0",
    "cryptography>=42.0.0",
]

[project.optional-dependencies]
dev = ["pytest>=7.0", "pytest-cov>=4.0"]

[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["."]
addopts = "--cov=ect --cov-report=term-missing --cov-fail-under=90 -v"

[tool.setuptools.packages.find]
include = ["ect*"]
1
workspace/packages/ect/testdata/valid_root_ect_payload.json
vendored
Normal file
@@ -0,0 +1 @@
{"iss":"spiffe://example.com/agent/clinical","aud":"spiffe://example.com/agent/safety","iat":1772064150,"exp":1772064750,"jti":"7f3a8b2c-d1e4-4f56-9a0b-c3d4e5f6a7b8","wid":"a0b1c2d3-e4f5-6789-abcd-ef0123456789","exec_act":"recommend_treatment","pred":[],"ect_ext":{"pol":"clinical_reasoning_policy_v2","pol_decision":"approved"}}
1
workspace/packages/ect/tests/__init__.py
Normal file
@@ -0,0 +1 @@
# Tests package
49
workspace/packages/ect/tests/test_config.py
Normal file
@@ -0,0 +1,49 @@
"""Tests for config module."""

import os

import pytest

from ect import default_config, load_config_from_env
from ect.config import ENV_IAT_MAX_AGE_MINUTES, ENV_JTI_REPLAY_CACHE_SIZE


def test_default_config():
    c = default_config()
    assert c.iat_max_age_sec == 900
    assert c.jti_replay_size == 0


def test_load_config_from_env():
    os.environ[ENV_IAT_MAX_AGE_MINUTES] = "20"
    os.environ[ENV_JTI_REPLAY_CACHE_SIZE] = "500"
    try:
        c = load_config_from_env()
        assert c.iat_max_age_sec == 20 * 60
        assert c.jti_replay_size == 500
    finally:
        os.environ.pop(ENV_IAT_MAX_AGE_MINUTES, None)
        os.environ.pop(ENV_JTI_REPLAY_CACHE_SIZE, None)


def test_config_create_options():
    c = default_config()
    opts = c.create_options("my-kid")
    assert opts.key_id == "my-kid"
    assert opts.default_expiry_sec == c.default_expiry_sec


def test_config_verify_options():
    c = default_config()
    opts = c.verify_options()
    assert opts.iat_max_age_sec == c.iat_max_age_sec
    assert opts.dag is not None


def test_load_config_invalid_int():
    os.environ[ENV_IAT_MAX_AGE_MINUTES] = "bad"
    try:
        c = load_config_from_env()
        assert c.iat_max_age_sec == 900
    finally:
        os.environ.pop(ENV_IAT_MAX_AGE_MINUTES, None)
74
workspace/packages/ect/tests/test_create.py
Normal file
74
workspace/packages/ect/tests/test_create.py
Normal file
@@ -0,0 +1,74 @@
"""Tests for ECT creation and roundtrip."""

import json
import os
import time

import pytest

from ect import (
    Payload,
    create,
    generate_key,
    CreateOptions,
    verify,
    VerifyOptions,
)


def test_create_roundtrip():
    key = generate_key()
    now = int(time.time())
    payload = Payload(
        iss="spiffe://example.com/agent/a",
        aud=["spiffe://example.com/agent/b"],
        iat=now,
        exp=now + 600,
        jti="e4f5a6b7-c8d9-0123-ef01-234567890abc",
        exec_act="review_spec",
        pred=[],
    )
    compact = create(payload, key, CreateOptions(key_id="agent-a-key-1"))
    assert compact

    def resolver(kid):
        if kid == "agent-a-key-1":
            return key.public_key()
        return None

    opts = VerifyOptions(
        verifier_id="spiffe://example.com/agent/b",
        resolve_key=resolver,
        now=now,
    )
    parsed = verify(compact, opts)
    assert parsed.payload.jti == payload.jti
    assert parsed.payload.exec_act == payload.exec_act


def test_create_with_test_vector():
    path = os.path.join(os.path.dirname(__file__), "..", "testdata", "valid_root_ect_payload.json")
    if not os.path.exists(path):
        pytest.skip(f"test vector not found: {path}")
    with open(path) as f:
        data = json.load(f)
    payload = Payload.from_claims(data)
    key = generate_key()
    now = int(time.time())
    payload.iat = now
    payload.exp = now + 600

    compact = create(payload, key, CreateOptions(key_id="test-kid"))
    assert compact

    def resolver(kid):
        if kid == "test-kid":
            return key.public_key()
        return None

    opts = VerifyOptions(
        verifier_id=payload.aud[0],
        resolve_key=resolver,
        now=now,
    )
    verify(compact, opts)
94
workspace/packages/ect/tests/test_create_extra.py
Normal file
@@ -0,0 +1,94 @@
"""Additional tests for create module."""

import time

import pytest

from ect import Payload, create, generate_key, CreateOptions, default_create_options


def test_default_create_options():
    opts = default_create_options()
    assert opts.key_id == ""


def test_create_errors():
    key = generate_key()
    p = Payload(iss="i", aud=["a"], iat=1, exp=2, jti="j", exec_act="e", pred=[])
    with pytest.raises(ValueError, match="KeyID|required"):
        create(p, key, CreateOptions(key_id=""))
    with pytest.raises((ValueError, TypeError, AttributeError)):
        create(None, key, CreateOptions(key_id="k"))


def test_create_optional_pol():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["a"], iat=now, exp=now + 3600,
        jti="jti-nopol", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    assert compact


def test_create_validation_errors():
    key = generate_key()
    base = dict(iss="i", aud=["a"], iat=1, exp=2, jti="j", exec_act="e", pred=[])
    with pytest.raises(ValueError, match="iss"):
        create(Payload(**{**base, "iss": ""}), key, CreateOptions(key_id="k"))
    with pytest.raises(ValueError, match="aud"):
        create(Payload(**{**base, "aud": []}), key, CreateOptions(key_id="k"))
    with pytest.raises(ValueError, match="jti"):
        create(Payload(**{**base, "jti": ""}), key, CreateOptions(key_id="k"))
    with pytest.raises(ValueError, match="exec_act"):
        create(Payload(**{**base, "exec_act": ""}), key, CreateOptions(key_id="k"))


def test_create_ext_compensation_reason_requires_required():
    key = generate_key()
    p = Payload(
        iss="i", aud=["a"], iat=1, exp=2, jti="j", exec_act="e", pred=[],
        ext={"compensation_reason": "rollback", "compensation_required": False},
    )
    with pytest.raises(ValueError, match="compensation_required"):
        create(p, key, CreateOptions(key_id="k"))


def test_create_zero_expiry_uses_default():
    key = generate_key()
    p = Payload(iss="i", aud=["a"], iat=0, exp=0, jti="j", exec_act="e", pred=[])
    compact = create(p, key, CreateOptions(key_id="k", default_expiry_sec=300))
    assert compact
    # create() works on a copy; decode the token to verify defaults were applied
    import jwt
    claims = jwt.decode(compact, options={"verify_signature": False})
    assert claims["exp"] > claims["iat"]


def test_create_validate_uuids_rejects_non_uuid_jti():
    key = generate_key()
    now = int(time.time())
    p = Payload(iss="i", aud=["a"], iat=now, exp=now + 3600, jti="not-a-uuid", exec_act="e", pred=[])
    with pytest.raises(ValueError, match="jti must be UUID"):
        create(p, key, CreateOptions(key_id="k", validate_uuids=True))


def test_create_max_pred_length():
    key = generate_key()
    now = int(time.time())
    p = Payload(iss="i", aud=["a"], iat=now, exp=now + 3600, jti="550e8400-e29b-41d4-a716-446655440000", exec_act="e", pred=["p1", "p2"])
    with pytest.raises(ValueError, match="pred exceeds max length"):
        create(p, key, CreateOptions(key_id="k", max_pred_length=1))


def test_create_ext_size_rejected():
    from ect.validate import EXT_MAX_SIZE
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="i", aud=["a"], iat=now, exp=now + 3600, jti="550e8400-e29b-41d4-a716-446655440000", exec_act="e", pred=[],
        ext={"x": "y" * (EXT_MAX_SIZE - 5)},
    )
    with pytest.raises(ValueError, match="ext exceeds max size"):
        create(p, key, CreateOptions(key_id="k"))
111
workspace/packages/ect/tests/test_dag.py
Normal file
@@ -0,0 +1,111 @@
"""Tests for DAG validation."""

import time

import pytest

from ect import Payload, MemoryLedger, validate_dag, default_dag_config


def test_validate_dag_root():
    store = MemoryLedger()
    payload = Payload(
        iss="",
        aud=[],
        iat=0,
        exp=0,
        jti="jti-001",
        exec_act="",
        pred=[],
        wid="wf-1",
    )
    validate_dag(payload, store, default_dag_config())


def test_validate_dag_duplicate_jti():
    store = MemoryLedger()
    p = Payload(
        iss="x",
        aud=["y"],
        iat=0,
        exp=0,
        jti="jti-001",
        exec_act="a",
        pred=[],
        wid="wf-1",
    )
    store.append("dummy-jws", p)
    payload = Payload(
        iss="",
        aud=[],
        iat=0,
        exp=0,
        jti="jti-001",
        exec_act="",
        pred=[],
        wid="wf-1",
    )
    with pytest.raises(ValueError, match="task ID.*already exists"):
        validate_dag(payload, store, default_dag_config())


def test_validate_dag_pred_exists():
    store = MemoryLedger()
    now = int(time.time())
    p = Payload(
        iss="x",
        aud=["y"],
        iat=now - 60,
        exp=now + 600,
        jti="jti-001",
        exec_act="a",
        pred=[],
        wid="wf-1",
    )
    store.append("jws1", p)
    payload = Payload(
        iss="",
        aud=[],
        iat=now,
        exp=now + 600,
        jti="jti-002",
        exec_act="b",
        pred=["jti-001"],
        wid="wf-1",
    )
    validate_dag(payload, store, default_dag_config())


def test_validate_dag_pred_not_found():
    store = MemoryLedger()
    now = int(time.time())
    payload = Payload(
        iss="",
        aud=[],
        iat=now,
        exp=now + 600,
        jti="jti-002",
        exec_act="",
        pred=["jti-missing"],
    )
    with pytest.raises(ValueError, match="predecessor task not found"):
        validate_dag(payload, store, default_dag_config())


def test_validate_dag_pred_policy_rejected_requires_compensation():
    store = MemoryLedger()
    now = int(time.time())
    p = Payload(
        iss="x", aud=["y"], iat=now - 60, exp=now + 600,
        jti="jti-rej", exec_act="a", pred=[], wid="wf-1",
        ext={"pol": "p", "pol_decision": "rejected"},
    )
    store.append("jws1", p)
    payload = Payload(
        iss="", aud=[], iat=now, exp=now + 600,
        jti="jti-child", exec_act="b", pred=["jti-rej"], wid="wf-1",
    )
    with pytest.raises(ValueError, match="compensation"):
        validate_dag(payload, store, default_dag_config())
    payload.ext = {"compensation_required": True}
    validate_dag(payload, store, default_dag_config())
40
workspace/packages/ect/tests/test_jti_cache.py
Normal file
@@ -0,0 +1,40 @@
"""Tests for JTI replay cache."""

import time

import pytest

from ect import new_jti_cache


def test_jti_cache_seen_and_add():
    cache = new_jti_cache(10, 60)
    assert cache.seen("jti-1") is False
    cache.add("jti-1")
    assert cache.seen("jti-1") is True
    assert cache.seen("jti-2") is False
    cache.add("jti-2")
    assert cache.seen("jti-2") is True


def test_jti_cache_expiry():
    cache = new_jti_cache(10, 1)  # 1 second TTL
    cache.add("jti-1")
    assert cache.seen("jti-1") is True
    time.sleep(1.1)
    assert cache.seen("jti-1") is False


def test_jti_cache_max_size_eviction():
    cache = new_jti_cache(2, 60)
    cache.add("jti-1")
    cache.add("jti-2")
    cache.add("jti-3")
    assert cache.seen("jti-3") is True


def test_jti_cache_add_when_already_present():
    cache = new_jti_cache(2, 60)
    cache.add("jti-1")
    cache.add("jti-1")
    assert cache.seen("jti-1") is True
38
workspace/packages/ect/tests/test_ledger_extra.py
Normal file
@@ -0,0 +1,38 @@
"""Additional tests for ledger module."""

import time

import pytest

from ect import Payload, MemoryLedger, ErrTaskIDExists


def test_ledger_append_and_get():
    m = MemoryLedger()
    p = Payload(iss="i", aud=["a"], iat=1, exp=2, jti="j1", exec_act="act", pred=[])
    seq = m.append("jws1", p)
    assert seq == 1
    assert m.get_by_tid("j1").jti == "j1"


def test_ledger_err_task_id_exists():
    m = MemoryLedger()
    p = Payload(iss="i", aud=["a"], iat=1, exp=2, jti="j-dup", exec_act="e", pred=[])
    m.append("jws1", p)
    with pytest.raises(ErrTaskIDExists):
        m.append("jws2", p)


def test_ledger_contains_wid():
    m = MemoryLedger()
    p = Payload(iss="i", aud=["a"], iat=1, exp=2, jti="j1", exec_act="e", pred=[], wid="wf1")
    m.append("jws", p)
    assert m.contains("j1", "") is True
    assert m.contains("j1", "wf1") is True
    assert m.contains("j1", "wf2") is False


def test_ledger_append_none():
    m = MemoryLedger()
    seq = m.append("jws", None)
    assert seq == 0
64
workspace/packages/ect/tests/test_types_extra.py
Normal file
@@ -0,0 +1,64 @@
"""Additional tests for types module."""

import pytest

from ect import Payload


def test_payload_contains_audience():
    p = Payload(iss="", aud=["a", "b"], iat=0, exp=0, jti="", exec_act="", pred=[])
    assert p.contains_audience("a") is True
    assert p.contains_audience("c") is False


def test_payload_compensation_required():
    p = Payload(iss="", aud=[], iat=0, exp=0, jti="", exec_act="", pred=[])
    assert p.compensation_required() is False
    p.ext = {"compensation_required": True}
    assert p.compensation_required() is True


def test_payload_has_policy_claims():
    p = Payload(iss="", aud=[], iat=0, exp=0, jti="", exec_act="", pred=[],
                ext={"pol": "p", "pol_decision": "approved"})
    assert p.has_policy_claims() is True
    p.ext = {"pol_decision": "approved"}
    assert p.has_policy_claims() is False
    p.ext = None
    assert p.has_policy_claims() is False


def test_payload_pol_decision():
    p = Payload(iss="", aud=[], iat=0, exp=0, jti="", exec_act="", pred=[],
                ext={"pol_decision": "rejected"})
    assert p.pol_decision() == "rejected"
    p.ext = None
    assert p.pol_decision() == ""


def test_payload_to_claims_optional():
    p = Payload(iss="i", aud=["a"], iat=1, exp=2, jti="j", exec_act="e", pred=[], wid="wf")
    claims = p.to_claims()
    assert claims["wid"] == "wf"
    assert "ect_ext" not in claims or not claims.get("ect_ext")


def test_payload_from_claims_aud_string():
    claims = {"iss": "i", "aud": "single", "iat": 1, "exp": 2, "jti": "j", "exec_act": "e", "pred": []}
    p = Payload.from_claims(claims)
    assert p.aud == ["single"]


def test_payload_to_claims_all_optional():
    p = Payload(
        iss="i", aud=["a"], iat=1, exp=2, jti="j", exec_act="e", pred=[],
        wid="w", inp_hash="h", out_hash="o", inp_classification="c",
        ext={"pol": "p", "pol_decision": "approved"},
    )
    claims = p.to_claims()
    assert claims["wid"] == "w"
    assert claims["inp_hash"] == "h"
    assert claims["out_hash"] == "o"
    assert claims["inp_classification"] == "c"
    assert claims["ect_ext"]["pol"] == "p"
    assert claims["ect_ext"]["pol_decision"] == "approved"
64
workspace/packages/ect/tests/test_validate.py
Normal file
@@ -0,0 +1,64 @@
"""Tests for validate module."""

import json

import pytest

from ect.validate import (
    EXT_MAX_DEPTH,
    EXT_MAX_SIZE,
    validate_ext,
    validate_hash_format,
    valid_uuid,
)


def test_valid_uuid():
    assert valid_uuid("550e8400-e29b-41d4-a716-446655440000") is True
    assert valid_uuid("00000000-0000-0000-0000-000000000000") is True
    assert valid_uuid("") is False
    assert valid_uuid("not-a-uuid") is False
    assert valid_uuid("550e8400e29b41d4a716446655440000") is False  # no dashes


def test_validate_ext_none():
    validate_ext(None)
    validate_ext({})


def test_validate_ext_size():
    # Serialized JSON must exceed EXT_MAX_SIZE (4096) bytes
    big = {"x": "y" * (EXT_MAX_SIZE - 2)}  # "{\"x\":\"...\"}" + payload
    raw = json.dumps(big)
    assert len(raw.encode("utf-8")) > EXT_MAX_SIZE
    with pytest.raises(ValueError, match="max size"):
        validate_ext(big)


def test_validate_ext_depth():
    deep = {"a": 1}
    for _ in range(EXT_MAX_DEPTH):
        deep = {"n": deep}
    with pytest.raises(ValueError, match="depth"):
        validate_ext(deep)


def test_validate_hash_format_empty():
    validate_hash_format("")


def test_validate_hash_format_ok():
    # Plain base64url per RFC 9449 / ECT spec (no algorithm prefix)
    validate_hash_format("YQ")
    validate_hash_format("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk")
    validate_hash_format("abc123-_XYZ")


def test_validate_hash_format_bad():
    # Colon is not valid base64url — rejects old prefixed format
    with pytest.raises(ValueError, match="plain base64url"):
        validate_hash_format("sha-256:YQ")
    with pytest.raises(ValueError, match="plain base64url"):
        validate_hash_format("not valid!!")
    # Null byte in payload
    with pytest.raises(ValueError, match="plain base64url"):
        validate_hash_format("YQ\x00")
194
workspace/packages/ect/tests/test_verify.py
Normal file
@@ -0,0 +1,194 @@
"""Tests for verify module."""

import time

import pytest

from ect import (
    Payload,
    create,
    generate_key,
    CreateOptions,
    parse,
    verify,
    VerifyOptions,
    default_verify_options,
)


def test_parse():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["a"], iat=now, exp=now + 3600,
        jti="jti-parse", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    parsed = parse(compact)
    assert parsed.payload.jti == "jti-parse"
    assert parsed.raw == compact


def test_default_verify_options():
    opts = default_verify_options()
    assert opts.dag is not None
    assert opts.iat_max_age_sec == 900


def test_verify_expired():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["v"], iat=now - 3600, exp=now - 60,
        jti="jti-exp", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    with pytest.raises(ValueError, match="expired"):
        verify(compact, VerifyOptions(verifier_id="v", resolve_key=resolver, now=now))


def test_verify_replay():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["v"], iat=now, exp=now + 3600,
        jti="jti-replay", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    with pytest.raises(ValueError, match="replay"):
        verify(compact, VerifyOptions(
            verifier_id="v", resolve_key=resolver, now=now,
            jti_seen=lambda j: j == "jti-replay",
        ))


def test_verify_invalid_typ():
    import jwt as jwt_lib
    with pytest.raises((ValueError, jwt_lib.exceptions.DecodeError)):
        verify("not-a-jws", VerifyOptions())


def test_verify_audience_mismatch():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["other"], iat=now, exp=now + 3600,
        jti="jti-a", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    with pytest.raises(ValueError, match="audience"):
        verify(compact, VerifyOptions(verifier_id="verifier", resolve_key=resolver, now=now))


def test_verify_wit_subject_mismatch():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="wrong-iss", aud=["v"], iat=now, exp=now + 3600,
        jti="jti-w", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    with pytest.raises(ValueError, match="WIT subject"):
        verify(compact, VerifyOptions(
            verifier_id="v", resolve_key=resolver, now=now, wit_subject="correct-iss",
        ))


def test_verify_iat_too_old():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["v"], iat=now - 2000, exp=now + 3600,
        jti="jti-old", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    with pytest.raises(ValueError, match="iat"):
        verify(compact, VerifyOptions(
            verifier_id="v", resolve_key=resolver, now=now, iat_max_age_sec=900,
        ))


def test_verify_unknown_key():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["v"], iat=now, exp=now + 3600,
        jti="jti-k", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: None  # unknown key
    with pytest.raises(ValueError, match="unknown key"):
        verify(compact, VerifyOptions(verifier_id="v", resolve_key=resolver, now=now))


def test_verify_resolve_key_required():
    key = generate_key()
    now = int(time.time())
    p = Payload(
        iss="iss", aud=["v"], iat=now, exp=now + 3600,
        jti="jti-r", exec_act="act", pred=[],
    )
    compact = create(p, key, CreateOptions(key_id="kid"))
    with pytest.raises(ValueError, match="ResolveKey"):
        verify(compact, VerifyOptions(verifier_id="v", resolve_key=None))


def test_verify_with_dag():
    from ect import MemoryLedger
    key = generate_key()
    ledger = MemoryLedger()
    now = int(time.time())
    root = Payload(
        iss="iss", aud=["v"], iat=now, exp=now + 3600,
        jti="jti-root", exec_act="act", pred=[],
    )
    compact_root = create(root, key, CreateOptions(key_id="kid"))
    resolver = lambda kid: key.public_key() if kid == "kid" else None
    opts = VerifyOptions(verifier_id="v", resolve_key=resolver, store=ledger, now=now)
    parsed = verify(compact_root, opts)
    ledger.append(compact_root, parsed.payload)
    child = Payload(
        iss="iss", aud=["v"], iat=now + 1, exp=now + 3600,
        jti="jti-child", exec_act="act2", pred=["jti-root"],
    )
    compact_child = create(child, key, CreateOptions(key_id="kid"))
    parsed2 = verify(compact_child, opts)
    assert parsed2.payload.jti == "jti-child"


def test_on_verify_attempt_callback():
    """Observability: on_verify_attempt is called with jti and error (or None)."""
    key = generate_key()
    now = int(time.time())
    p = Payload(iss="i", aud=["v"], iat=now, exp=now + 3600, jti="jti-obs", exec_act="a", pred=[])
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda k: key.public_key() if k == "kid" else None
    seen = []

    def hook(jti, err):
        seen.append((jti, err))

    opts = VerifyOptions(verifier_id="v", resolve_key=resolver, on_verify_attempt=hook)
    result = verify(compact, opts)
    assert result.payload.jti == "jti-obs"
    assert len(seen) == 1
    assert seen[0][0] == "jti-obs"
    assert seen[0][1] is None


def test_on_verify_attempt_called_on_failure():
    key = generate_key()
    now = int(time.time())
    p = Payload(iss="i", aud=["v"], iat=now, exp=now - 1, jti="jti-fail", exec_act="a", pred=[])
    compact = create(p, key, CreateOptions(key_id="kid"))
    resolver = lambda k: key.public_key() if k == "kid" else None
    seen = []
    opts = VerifyOptions(verifier_id="v", resolve_key=resolver, now=now, on_verify_attempt=lambda jti, err: seen.append((jti, err)))
    with pytest.raises(ValueError, match="expired"):
        verify(compact, opts)
    assert len(seen) == 1
    assert seen[0][0] == "jti-fail"
    assert seen[0][1] is not None
2
workspace/packages/pyproject.toml
Normal file
@@ -0,0 +1,2 @@
[tool.uv.workspace]
members = ["act", "ect"]