# PoC Plan — ACT + ECT over MCP with LangGraph

**Target**: end-to-end working demo for IETF 123 preparation and draft credibility.
**Location**: `demo/act-ect-mcp/` (moved out of `workspace/poc/` to be a first-class peer of `paper/` and `workspace/`)
**Date**: 2026-04-12

## Scenario

A user issues a **mandate** (ACT authorization token) to a LangGraph agent: "research topic X, produce summary." The agent, running in a LangGraph `StateGraph`, calls two tools exposed by an MCP server:

1. `search(query)` — returns fake hits
2. `summarize(text)` — returns fake summary

Every MCP tool call is authenticated by:

- an **ACT mandate** in the `Authorization: Bearer` header (capability check: `cap=["mcp.search","mcp.summarize"]`)
- an **ECT execution context** in a signed HTTP header per `draft-ietf-wimse-http-signature-03`, using the `wimse-aud` signature parameter
- the tool-call body hashed into the ECT `inp_hash`

On tool success, the agent mints an **ACT execution record** with:

- `status="completed"`
- `pred=[mandate.jti]` for the first call, `pred=[mandate.jti, prev_exec.jti]` for subsequent calls (a DAG, not a linear chain)
- `inp_hash` / `out_hash` bound to the request/response bodies

A standalone `verify` CLI walks the resulting ACT ledger + ECT store and prints the DAG.
## Components

```
poc/mcp-langgraph/
├── README.md            # how to run, what it proves
├── pyproject.toml       # depends on ietf-act, ietf-ect, mcp, langgraph, fastapi
├── keys/                # generated ES256 keys (gitignored)
├── src/
│   ├── keys.py          # key gen + JWKS loader
│   ├── tokens.py        # ACT mandate/exec minters, ECT header builder
│   ├── http_sig.py      # minimal http-signature-03 signer/verifier
│   ├── server.py        # MCP server (FastMCP streamable-http) + ECT/ACT middleware
│   ├── agent.py         # LangGraph agent + MCP client, injects ACT+ECT
│   └── verify_cli.py    # walks ledger, prints DAG
├── demo.sh              # end-to-end: start server, run agent, run verifier
└── tests/
    ├── test_token_flow.py   # one tool call → one exec record linked via pred
    └── test_dag_shape.py    # two tool calls → DAG with expected pred edges
```

## Build sequence (verifiable increments)

1. **Skeleton + deps**: `pyproject.toml`, package layout, install works.
2. **Keys + tokens**: `keys.py`, `tokens.py`. Unit test: mint mandate, mint exec, verify both.
3. **HTTP signature**: `http_sig.py`. Unit test: sign request, verify (round-trip).
4. **MCP server**: FastMCP with two fake tools + ASGI middleware that verifies ACT+ECT on every call. Manual curl test.
5. **LangGraph agent**: StateGraph, `langchain-mcp-adapters` with a custom headers callback. No LLM for v1 — a scripted node sequence calls both tools.
6. **Verifier CLI**: prints the DAG (mandate → exec1 → exec2).
7. **End-to-end `demo.sh`**: spawns server, runs agent, runs verifier. Green = PoC done.

## Explicit non-goals

- No hosted LLM APIs — any real LLM runs via Ollama (local, zero API cost), with `create_react_agent` deciding which MCP tools to call; the token flow is deterministic regardless of LLM output.
- No registration / discovery of agents — assume pre-shared JWKS.
- No distributed SCITT anchoring — in-memory ledger only.
- No Go interop in this PoC (Python + Python). The Go path is tracked separately.

## Tradeoffs

**Real MCP vs. MCP-shaped**: use the real MCP SDK (FastMCP server + `streamablehttp_client`).
More credible for IETF reviewers; adds dependency weight. If the MCP SDK blocks us from injecting ECT at the HTTP layer, fall back to FastAPI endpoints that mirror the MCP JSON-RPC shape.

**LLM**: real. Ollama `qwen3:8b` as the default (local, free, reproducible per the CLAUDE.md cost policy). Swap to Anthropic/OpenAI via an env var. LangGraph `create_react_agent` wires the LLM + MCP tools.

## Success criteria

- `demo.sh` exits 0; the verifier prints a DAG with 1 mandate + 2 execs + correct `pred` edges.
- All unit tests green.
- The README shows a reviewer can reproduce in under 5 minutes (`uv sync && ./demo.sh`).

## Questions before coding

- OK to put the PoC under `workspace/poc/mcp-langgraph/` (a new dir), not under `packages/`?
- OK to pin `mcp>=1.0`, `langgraph>=0.2`, `langchain-mcp-adapters` as deps?
- LangGraph without an LLM is fine for v1 of the PoC, right?
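For reference, the DAG walk that the success criteria expect from `verify_cli.py` can be sketched as follows. The ledger layout (records keyed by `jti`, back-linked via `pred`) follows the plan; the function names and the in-memory dict representation are illustrative assumptions, not the actual CLI.

```python
# Hypothetical sketch of the verifier's DAG walk over an in-memory ledger.
# Records are keyed by jti; execution records back-link via their `pred` lists.
def build_dag(ledger: dict) -> tuple[list, dict]:
    """Invert pred back-links into (roots, children) forward adjacency."""
    children: dict[str, list[str]] = {}
    roots: list[str] = []
    for jti, rec in ledger.items():
        preds = rec.get("pred", [])
        if not preds:
            roots.append(jti)             # no pred => mandate (a DAG root)
        for p in preds:
            children.setdefault(p, []).append(jti)
    return roots, children


def print_dag(ledger: dict) -> None:
    """Print the DAG; a node with multiple preds appears under each parent."""
    roots, children = build_dag(ledger)

    def walk(jti: str, depth: int = 0) -> None:
        status = ledger[jti].get("status", "mandate")
        print("  " * depth + f"{jti} [{status}]")
        for child in children.get(jti, []):
            walk(child, depth + 1)

    for root in roots:
        walk(root)


# The demo's expected shape: 1 mandate, 2 execs, pred edges forming a DAG.
ledger = {
    "mandate-1": {"cap": ["mcp.search", "mcp.summarize"]},
    "exec-1": {"status": "completed", "pred": ["mandate-1"]},
    "exec-2": {"status": "completed", "pred": ["mandate-1", "exec-1"]},
}
print_dag(ledger)
```

Note that `exec-2` is printed once under the mandate and once under `exec-1` — a plain tree print of a DAG duplicates multi-parent nodes, which is arguably the clearest way to show that `pred` carries two edges.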