Agent Failure Cascade Prevention and Rollback

Independent Researcher
ietf@nennemann.de
Area: OPS / Workgroup: NMOP
Keywords: cascade prevention, circuit breaker, rollback, failure domain, agent recovery

Abstract

This document defines protocols for preventing agent failures from cascading across interconnected autonomous systems, together with standardized mechanisms for real-time rollback of incorrect agent decisions. It specifies a circuit breaker protocol with well-defined state transitions, failure domain isolation through bulkhead patterns, cascade detection via error rate and latency analysis, and a distributed rollback coordination protocol that walks the Execution Context Token (ECT) DAG backwards to revert agent actions to a known-good state. This document absorbs and supersedes the concepts introduced in the earlier AERR and ATD proposals.
Introduction

Autonomous AI agents increasingly operate in interconnected multi-agent systems where a single agent's failure can propagate through the network, causing widespread service disruption. The IETF gap analysis identified two critical gaps in existing standards:

Gap 2 (Cascade Prevention): No standard mechanism exists for containing failures within agent ecosystems. When one agent fails, dependent agents continue sending requests to the failing agent, amplifying the failure across the system.

Gap 4 (Rollback): No standard protocol exists for reverting incorrect agent decisions. When an autonomous agent misconfigures a network device or makes an erroneous API call, there is no interoperable way to undo the action or to coordinate rollback across multiple affected agents.

This document addresses both gaps by defining:

- A circuit breaker protocol that stops failure propagation between agents.
- Failure domain isolation mechanisms that contain blast radius.
- Cascade detection signals that identify propagating failures early.
- A distributed rollback protocol that coordinates state reversion across multiple agents using the ECT DAG.

This specification absorbs and supersedes the concepts from the earlier Agent Error Recovery and Rollback (AERR) and Agent Task DAG (ATD) proposals, consolidating cascade prevention and rollback into a single coherent protocol built on ECT infrastructure.

Design principles:

- Agents that take consequential actions MUST be able to undo them, or MUST declare them irreversible upfront.
- Failure containment takes priority over failure diagnosis.
- The protocol adds minimal overhead to the happy path.
- All cascade prevention and rollback actions are recorded as ECT nodes, providing a cryptographic audit trail.
Terminology The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 when, and only when, they appear in all capitals, as shown here.
Circuit Breaker:
A mechanism that stops an agent from propagating requests to a failing downstream agent, preventing cascading failures. Modeled after the electrical circuit breaker pattern used in microservice architectures.
Failure Domain:
A bounded set of agents and resources within which a failure is contained. Failures within a domain MUST NOT propagate beyond the domain boundary without explicit escalation.
Blast Radius:
The set of agents and systems affected by a single agent's failure, determinable by traversing the ECT DAG forward from the failing node.
Cascade Detection:
The process of identifying that a failure is propagating across agent boundaries, using signals such as error rate spikes, latency increases, and resource exhaustion patterns.
Rollback Coordinator:
An agent or orchestrator responsible for coordinating distributed rollback across multiple agents in a workflow, ensuring consistency and resolving conflicts.
Checkpoint:
An ECT node recording an agent's state hash before a consequential action, providing a restore point for rollback.
Compensating Action:
An action that semantically reverses the effect of a prior action when direct state restoration is not possible (e.g., deleting a resource that was created, rather than restoring a pre-creation snapshot).
Recovery Point:
The most recent checkpoint in the ECT DAG to which an agent or workflow can be safely rolled back without violating consistency constraints.
Failure Cascade Prevention
Cascade Model When an agent fails in a multi-agent system, the failure can propagate through multiple vectors. The following diagram illustrates a typical cascade scenario:
    Agent A          Agent B          Agent C          Agent D
       |                |                |                |
       |    request     |                |                |
       |--------------->|    request     |                |
       |                |--------------->|    request     |
       |                |                |--------------->|
       |                |                |                X  FAILURE
       |                |                |<--- X ---------|
       |                | error/timeout  |                |
       |                |<---------------|                |
       | error/timeout  |                |                |
       |<---------------|                |                |
       |                |                |                |
       [CASCADE: all agents impacted by D's failure]
Failure Domain Taxonomy Failures in agent ecosystems fall into the following categories:
Agent-Local Failure:
A failure confined to a single agent instance (e.g., out-of-memory, logic error). The blast radius is limited to the agent itself and its immediate callers.
Service Failure:
A failure affecting all instances of a particular agent service (e.g., model endpoint unavailable). The blast radius includes all agents that depend on the failing service.
Infrastructure Failure:
A failure in shared infrastructure (e.g., network partition, certificate authority unavailable). The blast radius may span multiple failure domains.
Semantic Failure:
An agent produces incorrect output without raising an error (e.g., misconfiguration, wrong decision). This is the hardest category to detect and may propagate silently through the DAG.
Propagation Vectors in Agent Ecosystems

Failures propagate through the following vectors:

- Synchronous request chains: An agent blocks waiting for a failing downstream agent, causing its own callers to time out.
- Shared state corruption: An agent writes incorrect data to a shared store, causing other agents reading that data to fail or make incorrect decisions.
- Resource exhaustion: A failing agent consumes excessive resources (connections, memory, compute), starving healthy agents.
- Retry amplification: Multiple agents retry requests to a failing agent simultaneously, overwhelming it further.
Circuit Breaker Protocol Each agent MUST implement a circuit breaker for every downstream agent it communicates with.
States

The circuit breaker has three states (CLOSED, OPEN, and HALF_OPEN); CLOSED is described twice below to distinguish normal operation from post-recovery operation:
CLOSED (normal):
Requests flow through normally. The agent tracks the error rate over a sliding window (default: 60 seconds).
OPEN (failure detected):
When the error rate exceeds the configured threshold (default: 50% over the window), the breaker opens. All requests to the downstream agent are immediately rejected locally. The agent MUST emit an ECT with exec_act value "circuit_breaker_open".
HALF_OPEN (recovery probe):
After a cooldown period (default: 30 seconds), the breaker transitions to HALF_OPEN and allows a single probe request. If the probe succeeds, the breaker returns to CLOSED. If the probe fails, the breaker returns to OPEN with doubled cooldown (exponential backoff, maximum 300 seconds).
CLOSED (recovered):
When a probe succeeds in the HALF_OPEN state, the breaker returns to CLOSED and the agent MUST emit an ECT with exec_act value "circuit_breaker_close".
State Transition Rules
              error rate exceeds threshold
   CLOSED ────────────────────────────────► OPEN
      ▲                                      │
      │ probe succeeds                       │ cooldown expires
      │                                      ▼
      └───────────────────────────────── HALF_OPEN
                                             │ probe fails
                                             ▼
                               OPEN (cooldown *= 2, max 300s)
The following rules govern state transitions:

- CLOSED to OPEN: The error rate over the sliding window exceeds the configured threshold. The agent MUST emit a "circuit_breaker_open" ECT and reject all subsequent requests to the downstream agent.
- OPEN to HALF_OPEN: The cooldown timer expires. The agent MUST allow exactly one probe request through.
- HALF_OPEN to CLOSED: The probe request succeeds. The agent MUST emit a "circuit_breaker_close" ECT and resume normal operation. The error rate counters MUST be reset.
- HALF_OPEN to OPEN: The probe request fails. The cooldown period MUST be doubled (up to a maximum of 300 seconds).
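The state machine above can be sketched in code. This is a minimal, non-normative illustration using the defaults given earlier (50% threshold, 60-second window, 30-second cooldown, 300-second maximum); the class and method names are illustrative, and the points where ECT emission would occur are marked in comments rather than implemented.

```python
import time

CLOSED, OPEN, HALF_OPEN = "CLOSED", "OPEN", "HALF_OPEN"

class CircuitBreaker:
    """Per-downstream-agent breaker following the transition rules above."""

    def __init__(self, threshold=0.5, window_s=60, cooldown_s=30,
                 max_cooldown_s=300):
        self.threshold = threshold
        self.window_s = window_s
        self.cooldown_s = cooldown_s
        self.max_cooldown_s = max_cooldown_s
        self.state = CLOSED
        self.opened_at = None
        self.probe_in_flight = False
        self.events = []  # (timestamp, ok) samples inside the sliding window

    def _error_rate(self, now):
        self.events = [(t, ok) for t, ok in self.events
                       if now - t <= self.window_s]
        if not self.events:
            return 0.0
        return sum(1 for _, ok in self.events if not ok) / len(self.events)

    def allow_request(self, now=None):
        """Return True if a request to the downstream agent may proceed."""
        now = time.monotonic() if now is None else now
        if self.state == OPEN and now - self.opened_at >= self.cooldown_s:
            self.state = HALF_OPEN            # cooldown expired: probing
            self.probe_in_flight = False
        if self.state == HALF_OPEN:
            if self.probe_in_flight:
                return False                  # exactly one probe at a time
            self.probe_in_flight = True
            return True
        return self.state == CLOSED           # OPEN rejects locally

    def record(self, ok, now=None):
        """Record the outcome of a completed request or probe."""
        now = time.monotonic() if now is None else now
        if self.state == HALF_OPEN:
            self.probe_in_flight = False
            if ok:
                self.state = CLOSED           # probe succeeded: emit a
                self.events = []              # "circuit_breaker_close" ECT here
            else:
                self.cooldown_s = min(self.cooldown_s * 2, self.max_cooldown_s)
                self.state = OPEN             # probe failed: doubled cooldown
                self.opened_at = now
            return
        self.events.append((now, ok))
        if self.state == CLOSED and self._error_rate(now) > self.threshold:
            self.state = OPEN                 # threshold exceeded: emit a
            self.opened_at = now              # "circuit_breaker_open" ECT here
```

A production implementation would additionally emit the ECT nodes at the marked transitions and persist breaker state across restarts.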
Circuit Breaker Registration and Discovery

Agents MUST expose circuit breaker state at the well-known endpoint /.well-known/cascade/circuits:
Response:
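The response figure did not survive extraction; the following shows one plausible shape, with field names drawn from the cascade.* claims defined later in this document. The exact response schema shown here is an assumption, not normative.

```json
{
  "circuits": [
    {
      "downstream_agent": "spiffe://example.org/agent/d",
      "state": "OPEN",
      "error_rate": 0.62,
      "window_s": 60,
      "cooldown_s": 30
    }
  ]
}
```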
ECT Integration

Each circuit breaker state change MUST produce an ECT node:
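The example figure did not survive extraction; the following sketches the relevant claims of such a node, using the exec_act value and cascade.* extension claims defined in this document. The surrounding ECT envelope (jti, wid, par, signatures) is omitted, and the concrete values are illustrative.

```json
{
  "exec_act": "circuit_breaker_open",
  "ext": {
    "cascade.downstream_agent": "spiffe://example.org/agent/d",
    "cascade.error_rate": 0.62,
    "cascade.window_s": 60,
    "cascade.cooldown_s": 30,
    "cascade.reason": "error rate 62% exceeded 50% threshold"
  }
}
```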
Failure Domain Isolation
Blast Radius Containment Strategies

Agents MUST implement the following containment strategies:

- Request rejection at the boundary: When a circuit breaker opens, the agent MUST return a structured error to its callers indicating that the downstream dependency is unavailable, rather than propagating the failure.
- Timeout enforcement: Agents MUST enforce timeouts on all downstream requests. The timeout MUST be shorter than the caller's timeout to prevent timeout cascades.
- Graceful degradation: When a non-critical downstream agent is unavailable, agents SHOULD continue operating with reduced functionality rather than failing entirely.
Domain Boundary Enforcement

Failure domains are defined by the workflow topology in the ECT DAG. Each workflow (identified by the wid claim) constitutes a failure domain. Cross-workflow failures MUST be escalated through the HITL mechanism rather than propagating automatically. Agents at domain boundaries MUST:

- Validate all incoming requests against the circuit breaker state of their downstream dependencies before accepting work.
- Emit a "circuit_breaker_open" ECT when rejecting work due to downstream unavailability.
- Report domain health status via the circuits endpoint.
Bulkhead Patterns for Agent Pools

When multiple workflows share a common agent pool, the pool MUST implement bulkhead isolation:

- Connection limits: Each workflow MUST have a maximum number of concurrent connections to the shared agent pool.
- Queue isolation: Each workflow's requests MUST be queued independently, preventing one workflow's backlog from blocking others.
- Resource quotas: Shared agent pools SHOULD enforce per-workflow resource quotas (CPU, memory, request rate).
Cascade Detection
Detection Signals Agents MUST monitor the following signals for cascade detection:
Error Rate:
The ratio of failed requests to total requests over a sliding window. An error rate exceeding the circuit breaker threshold indicates a potential cascade.
Latency Spike:
A sudden increase in response latency (e.g., p99 latency exceeding 3x the baseline) indicates downstream congestion or failure. Agents SHOULD track latency baselines using exponentially weighted moving averages.
Resource Exhaustion:
Thread pool saturation, connection pool exhaustion, or memory pressure above configured thresholds indicates that a cascade is consuming resources.
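The latency-spike signal above can be tracked with an exponentially weighted moving average. The sketch below is illustrative; the smoothing factor (alpha) and the 3x spike multiplier are the example defaults mentioned above, not normative values.

```python
class LatencyBaseline:
    """EWMA latency baseline with spike detection (non-normative sketch)."""

    def __init__(self, alpha=0.1, spike_factor=3.0):
        self.alpha = alpha
        self.spike_factor = spike_factor
        self.baseline = None

    def observe(self, latency_ms):
        """Update the baseline and return True if this sample is a spike."""
        if self.baseline is None:
            self.baseline = latency_ms
            return False
        spike = latency_ms > self.spike_factor * self.baseline
        if not spike:
            # Only fold normal samples into the baseline, so a sustained
            # spike does not immediately become the new "normal".
            self.baseline += self.alpha * (latency_ms - self.baseline)
        return spike
```

In practice an agent would track one baseline per downstream dependency and per percentile (e.g. p99), and feed spike events into the cascade detection logic alongside the error rate signal.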
Propagation Tracking via ECT DAG Analysis

Orchestrators SHOULD analyze the ECT DAG to detect cascading patterns:

- Error clustering: Multiple "circuit_breaker_open" ECTs referencing the same downstream agent within a short window indicate a shared dependency failure.
- Depth-first propagation: Errors propagating along par chains in the DAG indicate a synchronous cascade.
- Breadth-first propagation: Multiple sibling nodes in the DAG failing concurrently indicate a shared infrastructure failure.
Alert Format and Escalation When cascade detection identifies a propagating failure, the detecting agent MUST emit a cascade alert ECT:
Cascade alerts with more than 3 affected agents SHOULD trigger HITL escalation.
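The alert figure did not survive extraction; the following sketches the relevant claims of a cascade alert ECT, using the exec_act value and cascade.* extension claims defined in this document. The pattern name and SPIFFE IDs are illustrative.

```json
{
  "exec_act": "cascade_detected",
  "ext": {
    "cascade.pattern": "error_clustering",
    "cascade.downstream_agent": "spiffe://example.org/agent/d",
    "cascade.affected_agents": 4,
    "cascade.blast_radius": [
      "spiffe://example.org/agent/a",
      "spiffe://example.org/agent/b",
      "spiffe://example.org/agent/c",
      "spiffe://example.org/agent/d"
    ]
  }
}
```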
Real-Time Rollback
Rollback Model Rollback reverses the effects of agent actions by walking the ECT DAG backwards from the point of failure to the nearest valid recovery point.
Walking the ECT DAG Backwards

The rollback process follows par references in reverse:

1. Identify the failing ECT node.
2. Find the checkpoint ECT associated with the failing action (referenced via par).
3. Follow par references backwards to identify all downstream actions that were caused by the checkpointed action.
4. Issue rollback requests to each affected agent in reverse topological order.
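The steps above amount to inverting the par edges, computing the blast radius forward from the checkpoint, and emitting the affected nodes in reverse topological order. A non-normative sketch, modeling the DAG as a mapping from node ID to its par (parent) references:

```python
from collections import deque

def rollback_order(par_refs, checkpoint_id):
    """Return nodes downstream of checkpoint_id in reverse topological order.

    par_refs maps node id -> list of parent ids (the node's `par` references).
    The most downstream actions come first, so they are rolled back first.
    """
    # Invert par references to get forward (parent -> children) edges.
    children = {}
    for node, parents in par_refs.items():
        for p in parents:
            children.setdefault(p, []).append(node)

    # Forward traversal from the checkpoint yields the blast radius.
    affected, queue = set(), deque([checkpoint_id])
    while queue:
        node = queue.popleft()
        if node in affected:
            continue
        affected.add(node)
        queue.extend(children.get(node, []))

    # Topologically sort the affected sub-DAG, then reverse the order.
    indeg = {n: 0 for n in affected}
    for n in affected:
        for c in children.get(n, []):
            if c in affected:
                indeg[c] += 1
    order, ready = [], deque(n for n in affected if indeg[n] == 0)
    while ready:
        n = ready.popleft()
        order.append(n)
        for c in children.get(n, []):
            if c in affected:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
    return list(reversed(order))
```

The function name and DAG representation are illustrative; a real implementation would operate on verified ECT nodes rather than bare identifiers.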
Compensating Actions vs State Restoration Rollback can be performed through two mechanisms:
State Restoration:
The agent restores its state from the checkpoint snapshot. This is the preferred mechanism when the checkpoint contains a complete state snapshot (verified via out_hash).
Compensating Action:
When state restoration is not possible (e.g., the action involved an external API call), the agent executes a compensating action that semantically reverses the original action. Compensating actions MUST be recorded as ECT nodes with exec_act value "compensate".
Rollback Scope Rollback can be scoped to three levels:
Single Agent:
Only the specified agent's checkpoint is rolled back. No downstream propagation occurs.
Sub-DAG:
The checkpoint and all downstream checkpoints in the sub-DAG are rolled back. This is the default when cascade is true.
Full Workflow:
All checkpoints in the workflow are rolled back and the workflow is terminated. This requires Rollback Coordinator authorization.
Checkpoint Protocol
Checkpoint Creation

An agent MUST create a checkpoint ECT before any consequential action. An action is consequential if it modifies external state (network configuration, database records, API calls with side effects). A checkpoint is an ECT with:

- exec_act: "checkpoint"
- par: the ECT of the action being checkpointed
- out_hash: SHA-256 hash of the agent's state snapshot
The cascade.reversible field MUST be present. If false, the agent declares that this action cannot be automatically undone and rollback requests MUST be escalated to a human operator via the HITL mechanism.
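Combining the required fields above with the cascade.* claims defined in this document, a checkpoint ECT might carry claims like the following. The envelope (jti, wid, signatures) is omitted and the placeholder values are illustrative.

```json
{
  "exec_act": "checkpoint",
  "par": ["<jti of the action being checkpointed>"],
  "out_hash": "sha256:9f2b...",
  "ext": {
    "cascade.reversible": true,
    "cascade.rollback_uri": "https://agent.example.org/.well-known/cascade/rollback",
    "cascade.target": "router-cfg-7",
    "cascade.ttl": 86400
  }
}
```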
Checkpoint Storage and Retrieval Checkpoint ECTs MUST be stored for at least the duration specified by cascade.ttl. Agents MUST store checkpoints in durable storage that survives agent restarts. Agents MUST expose a checkpoint retrieval endpoint:
The response MUST include the checkpoint ECT and its verification status (whether out_hash matches the current stored state snapshot).
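The endpoint figure did not survive extraction; a plausible exchange is sketched below. The URI path parameter and the response field names are assumptions beyond the requirement that the checkpoint ECT and its verification status be included.

```
GET /.well-known/cascade/checkpoints/ckpt-uuid

HTTP/1.1 200 OK
Content-Type: application/json

{
  "checkpoint": { "...": "checkpoint ECT claims" },
  "verified": true
}
```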
Checkpoint Verification

Before executing a rollback, the agent MUST verify the checkpoint integrity:

1. Retrieve the checkpoint ECT.
2. Verify the ECT signature chain (L2/L3).
3. Verify that the stored state snapshot matches out_hash.
4. Verify that the checkpoint has not expired (cascade.ttl).

If verification fails, the agent MUST reject the rollback request and emit an error ECT.
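The hash and TTL checks in the steps above can be sketched as follows. Signature-chain verification is delegated to a caller-supplied ECT verifier (not specified here), and the claim layout (out_hash, the standard JWT iat claim, ext) is an assumption about the checkpoint's shape.

```python
import hashlib
import time

def verify_checkpoint(ckpt, snapshot_bytes, verify_signature_chain, now=None):
    """Return True if the checkpoint may be used for rollback.

    ckpt:                   dict with 'out_hash', 'iat', and 'ext' claims
    snapshot_bytes:         the stored state snapshot to check against out_hash
    verify_signature_chain: caller-supplied L2/L3 signature verifier
    """
    now = time.time() if now is None else now
    if not verify_signature_chain(ckpt):
        return False  # step 2: signature chain invalid
    digest = "sha256:" + hashlib.sha256(snapshot_bytes).hexdigest()
    if digest != ckpt["out_hash"]:
        return False  # step 3: snapshot does not match out_hash
    if now - ckpt["iat"] > ckpt["ext"]["cascade.ttl"]:
        return False  # step 4: checkpoint expired
    return True
```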
Distributed Rollback Coordination
Rollback Coordinator Role

For rollbacks spanning multiple agents (sub-DAG or full workflow scope), a Rollback Coordinator MUST be designated. The coordinator is typically the orchestrator or the agent that initiated the workflow. The coordinator is responsible for:

1. Computing the blast radius by traversing the ECT DAG.
2. Determining rollback order (reverse topological sort).
3. Issuing rollback requests to each affected agent.
4. Tracking rollback progress and handling failures.
5. Emitting the final rollback completion ECT.
Two-Phase Rollback Protocol

Distributed rollback follows a two-phase protocol.

Phase 1: Prepare. The coordinator sends a prepare request to each affected agent:
{
  "rollback_id": "urn:uuid:...",
  "checkpoint_id": "ckpt-uuid",
  "scope": "sub_dag"
}
Each agent MUST respond with either:

- "prepared": The agent has verified its checkpoint and is ready to roll back.
- "cannot_prepare": The agent cannot roll back (e.g., checkpoint expired, irreversible action).

Phase 2: Execute. If all agents respond "prepared", the coordinator sends execute requests in reverse topological order:
{
  "rollback_id": "urn:uuid:...",
  "checkpoint_id": "ckpt-uuid",
  "phase": "execute"
}
If any agent responds "cannot_prepare" in Phase 1, the coordinator MUST either:

- Proceed with partial rollback (if the unprepared agent is not on the critical path), or
- Abort the rollback and escalate to HITL.
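The coordinator side of the two-phase exchange can be sketched as below. The transport (send) is caller-supplied, e.g. HTTPS POSTs to the agents' rollback/prepare and rollback endpoints; its signature, the critical-path parameter, and the return values are assumptions of this sketch, not normative.

```python
def coordinate_rollback(agents, rollback_id, checkpoint_ids, send, critical=()):
    """Drive a two-phase distributed rollback (non-normative sketch).

    agents:         affected agents in reverse topological order
    checkpoint_ids: mapping agent -> checkpoint identifier
    send:           transport callable (agent, phase, body) -> response string
    critical:       agents whose failure to prepare forces an abort
    Returns (status, failed_agents), status in {complete, partial, aborted}.
    """
    prepared, failed = [], []
    # Phase 1: every agent verifies its checkpoint before anything is undone.
    for agent in agents:
        resp = send(agent, "prepare", {"rollback_id": rollback_id,
                                       "checkpoint_id": checkpoint_ids[agent]})
        (prepared if resp == "prepared" else failed).append(agent)
    # Abort (and escalate to HITL) if a critical-path agent cannot prepare.
    if any(a in failed for a in critical):
        return "aborted", failed
    # Phase 2: execute on prepared agents, preserving reverse topo order.
    for agent in prepared:
        send(agent, "execute", {"rollback_id": rollback_id,
                                "checkpoint_id": checkpoint_ids[agent],
                                "phase": "execute"})
    return ("partial" if failed else "complete"), failed
```

The "partial" return corresponds to the partial rollback handling described in the next subsection, where the coordinator records the unrecovered agents in cascade.failed_agents.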
Partial Rollback Handling

When a distributed rollback cannot be completed fully, the coordinator MUST:

1. Roll back all agents that responded "prepared".
2. Record the partial rollback result in the ECT DAG.
3. Emit an ECT with exec_act value "rollback_complete" and cascade.status set to "partial".
4. Include the list of agents that could not be rolled back in the cascade.failed_agents extension claim.
Conflict Resolution During Concurrent Rollbacks

When multiple rollback requests target overlapping portions of the ECT DAG:

1. The rollback with the broader scope takes precedence (full workflow > sub-DAG > single agent).
2. If scopes are equal, the earlier rollback request (by timestamp) takes precedence.
3. The losing rollback request MUST be rejected with an error indicating the conflicting rollback ID.

Agents MUST implement idempotent rollback: receiving the same rollback_id twice MUST return the same result without re-executing the rollback.
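The precedence rule above reduces to a simple comparison. A non-normative sketch, assuming each request carries its scope, request timestamp, and rollback_id:

```python
# Rank scopes so that broader scope wins: full workflow > sub-DAG > single.
SCOPE_RANK = {"single": 0, "sub_dag": 1, "full_workflow": 2}

def winning_rollback(a, b):
    """Return whichever of two conflicting rollback requests takes precedence.

    Each argument is a dict with 'scope', 'ts' (request timestamp), and
    'rollback_id'. Broader scope wins; ties go to the earlier timestamp.
    """
    ra, rb = SCOPE_RANK[a["scope"]], SCOPE_RANK[b["scope"]]
    if ra != rb:
        return a if ra > rb else b
    return a if a["ts"] <= b["ts"] else b
```

The losing request would then be rejected with an error naming the winner's rollback_id, per rule 3 above.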
Rollback Evidence
ECT Nodes for Rollback Actions Each rollback action MUST produce ECT nodes for audit:
Rollback Start:
exec_act: "rollback_start", par references the error ECT that triggered the rollback.
Rollback Complete:
exec_act: "rollback_complete", par references the rollback start ECT.
Rollback Audit Trail The complete rollback audit trail is captured in the ECT DAG:
Status values for individual agent rollbacks: completed, partial, escalated, failed.
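The audit trail figure did not survive extraction; the following sketches the claims of a terminating "rollback_complete" node for a partial rollback, using the cascade.* claims defined in this document. The per-agent entry shape inside cascade.cascaded is an assumption.

```json
{
  "exec_act": "rollback_complete",
  "par": ["<jti of the rollback_start ECT>"],
  "ext": {
    "cascade.rollback_id": "urn:uuid:...",
    "cascade.status": "partial",
    "cascade.cascaded": [
      {"agent": "spiffe://example.org/agent/b", "status": "completed"},
      {"agent": "spiffe://example.org/agent/c", "status": "failed"}
    ],
    "cascade.failed_agents": ["spiffe://example.org/agent/c"]
  }
}
```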
ECT Integration

This document defines the following new exec_act values for use in ECT nodes:

  exec_act Value         Description
  ---------------------  --------------------------------------------------
  circuit_breaker_open   Circuit breaker transitioned to OPEN state
  circuit_breaker_close  Circuit breaker transitioned to CLOSED state
  checkpoint             State snapshot before consequential action
  rollback_start         Rollback initiated for a checkpoint
  rollback_complete      Rollback finished (with status)
  compensate             Compensating action executed in lieu of state
                         restoration
  cascade_detected       Cascading failure pattern detected

This document defines the following new ext claims for failure context:

  Claim                      Type     Description
  -------------------------  -------  -----------------------------------------
  cascade.downstream_agent   string   SPIFFE ID of the downstream agent
  cascade.error_rate         number   Error rate that triggered the circuit
                                      breaker
  cascade.window_s           number   Sliding window duration in seconds
  cascade.cooldown_s         number   Cooldown duration in seconds
  cascade.reversible         boolean  Whether the checkpointed action can be
                                      undone
  cascade.rollback_uri       string   URI for rollback requests
  cascade.target             string   Target system of the checkpointed action
  cascade.ttl                number   Checkpoint time-to-live in seconds
  cascade.rollback_id        string   Unique identifier for a rollback operation
  cascade.checkpoint_id      string   JTI of the checkpoint being rolled back
  cascade.scope              string   Rollback scope: single, sub_dag,
                                      full_workflow
  cascade.status             string   Rollback result status
  cascade.reason             string   Human-readable reason for the action
  cascade.pattern            string   Detected cascade pattern type
  cascade.affected_agents    number   Count of agents affected by cascade
  cascade.blast_radius       array    SPIFFE IDs of affected agents
  cascade.cascaded           array    Per-agent rollback results
  cascade.failed_agents      array    Agents that could not be rolled back
  cascade.state_hash_before  string   State hash before rollback
  cascade.state_hash_after   string   State hash after rollback
  cascade.description        string   Human-readable description
Security Considerations
Rollback Weaponization

Malicious agents could attempt to force unnecessary rollbacks to disrupt workflows. Mitigations:

- Rollback requests MUST be authenticated via the ECT signature chain. Only agents whose ECTs appear in the same workflow DAG (identified by wid) are authorized to request rollback.
- Rollback requests from outside the originating workflow MUST be rejected with HTTP 403.
- Agents SHOULD implement rate limiting on rollback requests to prevent denial of service through rollback flooding.
- The two-phase rollback protocol provides a prepare phase where agents can validate the rollback request before committing.
Circuit Breaker Manipulation

An adversary could attempt to manipulate circuit breaker state to either prevent legitimate circuit breaking or force unnecessary circuit breaks:

- False error injection: A malicious agent could emit false error ECTs to trigger circuit breakers. At L2/L3, ECT signatures prevent forgery. Agents SHOULD verify that error ECTs reference valid par values within their own workflow DAG.
- Circuit breaker suppression: An adversary could attempt to reset circuit breakers by sending successful probe responses. Agents MUST only accept probe responses from the actual downstream agent (verified via ECT identity binding).
- Status endpoint abuse: The /.well-known/cascade/circuits endpoint reveals system health topology. This endpoint MUST require authentication and SHOULD be restricted to agents within the same administrative domain.
Checkpoint Integrity

Checkpoint state snapshots contain sensitive system state. Agents MUST:

- Encrypt stored checkpoint state at rest.
- Reference checkpoint state via out_hash only in ECTs; checkpoint contents MUST NOT be included in ECT claims.
- Verify out_hash integrity before executing rollback to prevent rollback to a tampered state.
- Enforce checkpoint storage quotas to prevent checkpoint flooding attacks.
- Purge expired checkpoints (past cascade.ttl).
IANA Considerations
Registration of exec_act Values

This document requests registration of the following exec_act values in the ECT exec_act registry:

  Value                  Description                                 Reference
  ---------------------  ------------------------------------------  -------------
  circuit_breaker_open   Circuit breaker transitioned to OPEN        This document
  circuit_breaker_close  Circuit breaker transitioned to CLOSED      This document
  checkpoint             State snapshot before consequential action  This document
  rollback_start         Rollback operation initiated                This document
  rollback_complete      Rollback operation finished                 This document
  compensate             Compensating action executed                This document
  cascade_detected       Cascading failure pattern detected          This document
Registration of ext Claims

This document requests registration of the ext claims listed in the ECT Integration section of this document in the ECT extension claims registry. All claims use the cascade. namespace prefix.
Well-Known URI Registration

This document requests registration of the following well-known URI suffixes in the Well-Known URIs registry:

  URI Suffix                Description                Reference
  ------------------------  -------------------------  -------------
  cascade/circuits          Circuit breaker status     This document
  cascade/rollback          Rollback request endpoint  This document
  cascade/rollback/prepare  Rollback prepare endpoint  This document
  cascade/checkpoints       Checkpoint retrieval       This document
References

[RFC2119]  Key words for use in RFCs to Indicate Requirement Levels. Defines the capitalized requirement key words used in IETF specifications.

[RFC8174]  Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. Clarifies that only UPPERCASE uses of the key words carry the defined special meanings.

[RFC7519]  JSON Web Token (JWT). A compact, URL-safe means of representing claims to be transferred between two parties, digitally signed via JWS or encrypted via JWE.

[RFC7515]  JSON Web Signature (JWS). Represents content secured with digital signatures or MACs using JSON-based data structures.

[RFC9110]  HTTP Semantics. Defines the overall architecture, common terminology, and shared protocol elements of HTTP, including the "http" and "https" URI schemes.

[ECT]      Execution Context Tokens for Distributed Agentic Workflows.

[ACPT]     Agent Context Policy Token: DAG Delegation with Human Override.

[GAP]      Gap Analysis of IETF Standards for Autonomous AI Agent Networking.
Acknowledgments This document absorbs and supersedes concepts from the earlier Agent Error Recovery and Rollback (AERR) and Agent Task DAG (ATD) proposals. It builds on the Execution Context Token specification for DAG-based audit trails and the Agent Context Policy Token for HITL escalation of irreversible actions. The circuit breaker pattern is adapted from microservice architecture best practices.