Agent Behavioral Verification and Performance Benchmarking
Independent Researcher
ietf@nennemann.de
Workgroup: NMOP (OPS Area)

Abstract

This document defines protocols for runtime verification that deployed AI agents behave according to their declared policies. It also specifies standardized metrics and a framework for benchmarking agent performance across implementations. Behavioral Evidence Tokens (BETs) extend the Execution Context Token architecture to provide cryptographically verifiable proof of policy compliance. Performance profiles enable objective comparison of agent capabilities.
Introduction

Autonomous AI agents increasingly operate in networked environments where they make decisions, invoke tools, and delegate tasks to other agents. Operators and relying parties need assurance that these agents behave according to their declared policies at runtime, not merely at deployment time.

The gap analysis for autonomous agent protocols identifies two critical gaps in the current standards landscape:

Gap 1 (Behavioral Verification): Agents declare policies in their Execution Context Tokens, but no standardized mechanism exists to verify that runtime behavior matches those declarations.

Gap 11 (Performance Benchmarking): No standardized way exists to compare agent implementations objectively across dimensions such as task completion, latency, accuracy, and safety compliance.

This document addresses both gaps by defining:

- A behavioral verification architecture aligned with the Remote Attestation Procedures (RATS) framework.
- Behavioral Evidence Tokens (BETs) that extend the Execution Context Token (ECT) with runtime compliance claims.
- A performance benchmarking framework with standard metrics, benchmark profiles, and an execution protocol.
Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 when, and only when, they appear in all capitals, as shown here.

The following terms are used in this document:
Behavioral Attestation:
The process of generating verifiable evidence that an agent's runtime actions conform to its declared policies.
Policy-Behavior Binding:
A formal linkage between a declared policy in an agent's ECT and observable runtime actions that demonstrate compliance with that policy.
Behavioral Evidence Token (BET):
A signed token containing claims about an agent's observed runtime behavior relative to its declared policies. BETs extend the ECT architecture.
Runtime Monitor:
A component that observes agent actions and collects evidence for behavioral attestation.
Benchmark Suite:
A collection of standardized test scenarios designed to evaluate agent performance across defined metrics.
Performance Profile:
A structured record of benchmark results for a specific agent implementation.
Behavioral Verification Architecture
Verification Model Overview

The behavioral verification architecture aligns with the RATS roles of Attester, Verifier, and Relying Party. A Runtime Monitor collects evidence of agent actions and produces Behavioral Evidence Tokens.
+-------------+        +---------+
|    Agent    | actions| Runtime |
|  (Attester) |------->| Monitor |
+-------------+        +----+----+
                            | evidence
                       +----v----+
                       |   BET   |
                       | Creator |
                       +----+----+
                            | BET
                  +---------v---------+
                  |     Verifier      |
                  |  (Policy Engine)  |
                  +---------+---------+
                            | attestation result
                  +---------v---------+
                  |   Relying Party   |
                  |  (Orchestrator /  |
                  |     Operator)     |
                  +-------------------+
The architecture supports two modes of operation:

Continuous Monitoring: The Runtime Monitor observes all agent actions in real time and generates BETs at configurable intervals or upon policy-relevant events.

Point-in-Time Attestation: A Verifier requests behavioral evidence for a specific time window, and the Monitor assembles a BET covering that period.
Policy-Behavior Binding

A Policy-Behavior Binding declares the expected behaviors associated with a policy and the observable actions that constitute compliance. The binding is expressed as a JSON object.
Each binding MUST include:

- policy_id: A URI identifying the policy.
- expected_behaviors: An array of behavior descriptors.
- evaluation_mode: Either "continuous" or "on_demand".

Each behavior descriptor MUST include:

- behavior_id: A unique identifier.
- observable_actions: Action types the monitor MUST observe.
- compliance_criteria: The conditions under which the behavior is considered compliant.
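A binding with the required fields above might look as follows. This is a minimal Python sketch; the policy URI, behavior identifiers, action types, and compliance criteria are hypothetical, not normative.

```python
import json

# Hypothetical Policy-Behavior Binding using the required fields
# defined above; every identifier and value is illustrative.
binding = {
    "policy_id": "https://example.com/policies/data-handling",
    "evaluation_mode": "continuous",
    "expected_behaviors": [
        {
            "behavior_id": "no-external-upload",
            "observable_actions": ["http_request", "file_write"],
            "compliance_criteria": {
                "http_request": {"target_allowlist": ["https://internal.example.com/"]}
            },
        }
    ],
}

def validate_binding(b: dict) -> bool:
    """Check the MUST-level structural requirements listed above."""
    if not all(k in b for k in ("policy_id", "expected_behaviors", "evaluation_mode")):
        return False
    if b["evaluation_mode"] not in ("continuous", "on_demand"):
        return False
    return all(
        all(k in d for k in ("behavior_id", "observable_actions", "compliance_criteria"))
        for d in b["expected_behaviors"]
    )

print(validate_binding(binding))  # True
```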
Behavioral Evidence Tokens (BET)

A Behavioral Evidence Token is a JSON Web Token (JWT) signed using JSON Web Signature (JWS). BETs extend the ECT claim set with behavioral verification claims. The following new claims are defined:
bhv_policy:
REQUIRED. A URI reference to the policy being verified.
bhv_result:
REQUIRED. The verification result. One of "pass", "fail", or "partial".
bhv_evidence:
REQUIRED. A base64url-encoded hash (SHA-256) of the collected observable actions during the observation window.
bhv_window:
REQUIRED. A JSON object with start and end fields containing NumericDate values (as defined in the JWT specification) representing the observation period.
bhv_details:
OPTIONAL. An array of per-behavior results with behavior_id and individual result values.
Example BET payload:
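The following hypothetical payload combines standard JWT claims with the bhv_* claims defined above; all identifiers, hash values, and timestamps are invented for illustration.

```python
import json

# Hypothetical BET payload; every value here is illustrative.
bet_payload = {
    "iss": "https://monitor.example.com",          # the Runtime Monitor
    "sub": "https://agents.example.com/agent-a",   # the observed agent
    "iat": 1735689600,
    "bhv_policy": "https://example.com/policies/data-handling",
    "bhv_result": "pass",
    # base64url-encoded SHA-256 over the observed actions (made-up value)
    "bhv_evidence": "sstPpM3QWQVbBTn4sM4J0HLM99npnfD6R2zgJ0t8uVg",
    "bhv_window": {"start": 1735686000, "end": 1735689600},
    "bhv_details": [
        {"behavior_id": "no-external-upload", "result": "pass"}
    ],
}

print(json.dumps(bet_payload, indent=2))
```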
BET Lifecycle

The lifecycle of a Behavioral Evidence Token consists of three phases:

1. Creation: The Runtime Monitor collects evidence of agent actions, evaluates them against the Policy-Behavior Binding, and constructs a BET with the appropriate claims. The BET is signed by the Monitor's key.

2. Submission: The signed BET is submitted to the Verifier. Submission MAY occur via a push model (Monitor sends to Verifier) or a pull model (Verifier requests from Monitor).

3. Verification: The Verifier validates the BET signature, checks the claims against its reference policies, and produces an attestation result for the Relying Party.
Runtime Monitoring Protocol
Monitor Placement

Runtime Monitors MAY be deployed in one of three configurations:
Inline:
The Monitor intercepts all agent communications as a proxy. This provides complete visibility but adds latency.
Sidecar:
The Monitor runs alongside the agent process and receives copies of all actions via a local interface. This minimizes latency while maintaining visibility.
External:
The Monitor operates as a separate service that receives action logs asynchronously. This provides the least overhead but may miss real-time events.
Observation Collection

The Monitor MUST maintain a time-ordered log of observed actions. Each log entry MUST contain:

- Timestamp (NumericDate)
- Action type
- Action target (URI)
- Action parameters (opaque to the Monitor)
- Agent identifier
Evidence Assembly

When assembling evidence for a BET, the Monitor MUST:

1. Select all log entries within the observation window.
2. Compute a SHA-256 hash over the canonical JSON serialization of the selected entries.
3. Evaluate each entry against the applicable Policy-Behavior Bindings.
4. Determine the aggregate bhv_result.
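The selection and hashing steps can be sketched as follows. The document does not pin down the canonical JSON serialization, so the choice here (sorted keys, compact separators) is an assumption, as are the log entry field names.

```python
import base64, hashlib, json

def assemble_evidence(log: list[dict], start: int, end: int) -> str:
    """Hash the log entries inside [start, end] for the bhv_evidence claim.

    Canonicalization is json.dumps with sorted keys and compact
    separators -- one plausible choice, not mandated by the document.
    """
    selected = [e for e in log if start <= e["timestamp"] <= end]
    canonical = json.dumps(selected, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Hypothetical observation log; field names are illustrative.
log = [
    {"timestamp": 100, "action_type": "http_request",
     "target": "https://internal.example.com/api", "agent_id": "agent-a"},
    {"timestamp": 250, "action_type": "file_write",
     "target": "file:///tmp/out.json", "agent_id": "agent-a"},
    {"timestamp": 900, "action_type": "http_request",
     "target": "https://internal.example.com/api", "agent_id": "agent-a"},
]
print(assemble_evidence(log, 0, 300))  # covers only the first two entries
```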
Anomaly Detection Signaling

When the Monitor detects behavior that violates a Policy-Behavior Binding, it MUST:

1. Generate a BET with bhv_result set to "fail" or "partial".
2. Signal the anomaly to the Verifier immediately, regardless of the configured reporting interval.
3. Optionally signal the agent's orchestrator to enable corrective action.
Performance Benchmarking Framework
Standard Metrics

The following metrics are defined for agent performance benchmarking:
Task Completion Rate (TCR):
The ratio of successfully completed tasks to total tasks attempted. Unit: percentage (%). Measured over a complete benchmark suite run.
Task Latency (TL):
The time elapsed from task assignment to task completion. Unit: milliseconds (ms). Reported as p50, p95, and p99 percentiles.
Task Accuracy (TA):
The degree to which task outputs match expected results. Unit: percentage (%). Measured using benchmark-specific evaluation functions.
Resource Efficiency (RE):
The computational resources consumed per task. Unit: normalized resource units (NRU). Includes CPU, memory, and network I/O.
Safety Compliance Score (SCS):
The ratio of tasks completed without safety policy violations to total tasks. Unit: percentage (%).
Delegation Success Rate (DSR):
The ratio of successful delegations to total delegation attempts. Unit: percentage (%). Applicable only to multi-agent scenarios.
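The ratio metrics and latency percentiles above can be computed from a per-task log. The record fields (status, latency_ms, safety_violation) and the nearest-rank percentile method are assumptions made for this sketch.

```python
import math

# Hypothetical per-task records; field names are illustrative.
tasks = [
    {"status": "ok", "latency_ms": 120, "safety_violation": False},
    {"status": "ok", "latency_ms": 340, "safety_violation": False},
    {"status": "fail", "latency_ms": 900, "safety_violation": True},
    {"status": "ok", "latency_ms": 80, "safety_violation": False},
]

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile over the sorted values."""
    ordered = sorted(values)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

completed = sum(t["status"] == "ok" for t in tasks)
tcr = 100 * completed / len(tasks)  # Task Completion Rate (%)
scs = 100 * sum(not t["safety_violation"] for t in tasks) / len(tasks)  # SCS (%)
latencies = [t["latency_ms"] for t in tasks]
tl = {p: percentile(latencies, p) for p in (50, 95, 99)}  # Task Latency (ms)

print(tcr, scs, tl)  # 75.0 75.0 {50: 120, 95: 900, 99: 900}
```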
Benchmark Profiles

A Benchmark Profile defines a standardized set of test scenarios for a specific agent category. Profiles are expressed as JSON objects.
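As a hypothetical illustration of such a profile: the document does not fix the schema, so every field name and value below is an assumption.

```python
import json

# Hypothetical Benchmark Profile; the schema shown here is assumed
# for illustration, not defined by the document.
profile = {
    "profile_id": "urn:example:benchmark:general-purpose:v1",
    "agent_category": "general-purpose",
    "metrics": ["TCR", "TL", "TA", "RE", "SCS"],
    "scenarios": [
        {"scenario_id": "summarize-report", "weight": 0.5, "timeout_ms": 30000},
        {"scenario_id": "plan-multi-step-task", "weight": 0.5, "timeout_ms": 60000},
    ],
}

# Sanity check: scenario weights form a complete weighting.
assert abs(sum(s["weight"] for s in profile["scenarios"]) - 1.0) < 1e-9
print(json.dumps(profile, indent=2))
```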
Predefined profiles SHOULD be registered for common agent types, including:

- General-purpose agents
- Code generation agents
- Data analysis agents
- Network management agents
Benchmark Execution Protocol
Test Harness Requirements

A conformant test harness MUST:

1. Execute all scenarios in the benchmark profile in a controlled environment.
2. Isolate agent instances from external resources not specified in the scenario.
3. Record all metrics defined in the profile.
4. Produce a benchmark result document.
Result Reporting Format

Benchmark results MUST be reported as a JSON object containing:

- profile_id: The benchmark profile used.
- agent_id: Identifier of the tested agent.
- timestamp: Time of benchmark execution.
- results: Per-scenario metric values.
- aggregate: Weighted aggregate scores.
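Putting the required fields together, and computing the weighted aggregate from per-scenario scores; the scenario names, weights, and scores are invented for this sketch.

```python
# Hypothetical per-scenario scores and weights; names are illustrative.
scenario_scores = {"summarize-report": 92.0, "plan-multi-step-task": 78.0}
scenario_weights = {"summarize-report": 0.5, "plan-multi-step-task": 0.5}

# Weighted aggregate over all scenarios.
aggregate = sum(scenario_scores[s] * scenario_weights[s] for s in scenario_scores)

result_document = {
    "profile_id": "urn:example:benchmark:general-purpose:v1",
    "agent_id": "https://agents.example.com/agent-a",
    "timestamp": 1735689600,
    "results": scenario_scores,
    "aggregate": {"score": aggregate},
}
print(result_document["aggregate"])  # {'score': 85.0}
```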
Anti-Gaming Provisions

To prevent agents from gaming benchmark results, the following provisions apply:

Randomized Scenarios: Test harnesses MUST randomize scenario ordering and MAY introduce minor variations in scenario parameters.

Blind Evaluation: The agent under test MUST NOT have access to the expected outputs or evaluation functions.

Holdback Scenarios: Benchmark profiles SHOULD include scenarios not disclosed to agent developers.

Temporal Variation: Repeated benchmark runs MUST vary timing to prevent memoization attacks.
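The Randomized Scenarios provision could be realized along these lines; the +/-10% jitter range and the input_size parameter are assumptions made for illustration.

```python
import random

def randomize_run(scenarios: list[dict], seed: int) -> list[dict]:
    """Shuffle scenario order and jitter a hypothetical numeric
    parameter by up to +/-10%, per the Randomized Scenarios provision."""
    rng = random.Random(seed)
    run = [dict(s) for s in scenarios]  # copy; don't mutate the profile
    rng.shuffle(run)
    for s in run:
        if "input_size" in s:  # hypothetical variable parameter
            s["input_size"] = round(s["input_size"] * rng.uniform(0.9, 1.1))
    return run

scenarios = [{"scenario_id": "a", "input_size": 100}, {"scenario_id": "b"}]
print([s["scenario_id"] for s in randomize_run(scenarios, seed=42)])
```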
Performance Claims in ECT

Agent ECTs MAY include performance attestation claims in the ext field:
perf_profile:
The benchmark profile identifier.
perf_score:
The aggregate benchmark score.
perf_timestamp:
The time of the benchmark execution.
perf_harness:
Identifier of the test harness that produced the results.
These claims allow relying parties to evaluate agent capability before delegation.
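A hypothetical ext fragment carrying these claims, together with one way a relying party might use it; all identifiers, the score, and the minimum-score gate are illustrative assumptions.

```python
# Hypothetical ECT "ext" fragment with the perf_* claims defined above.
ect_ext = {
    "perf_profile": "urn:example:benchmark:general-purpose:v1",
    "perf_score": 85.0,
    "perf_timestamp": 1735689600,
    "perf_harness": "https://harness.example.org/v2",
}

def capable_enough(ext: dict, minimum: float) -> bool:
    """Gate delegation on a minimum aggregate benchmark score."""
    return ext.get("perf_score", 0.0) >= minimum

print(capable_enough(ect_ext, 80.0))  # True
```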
Integration with ECT

Behavioral Evidence Tokens integrate into the ECT DAG defined in the ECT specification as follows:

- Each BET references the ECT of the agent whose behavior was verified via the sub claim.
- BETs are attached as child nodes in the ECT DAG, linked to the agent's execution node.
- When an agent delegates to a sub-agent, the delegating agent's BET chain includes evidence covering the delegation decision.
- Verifiers traversing the DAG can inspect BETs at each node to assess behavioral compliance across the entire execution chain.
+----------+      +----------+
|   ECT    |      |   ECT    |
| Agent A  |      | Agent B  |
+----+-----+      +----+-----+
     |                 |
+----v-----+      +----v-----+
|   BET    |      |   BET    |
| Agent A  |      | Agent B  |
+----------+      +----------+
This structure enables end-to-end behavioral verification across multi-agent workflows.
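A Verifier's traversal of such a DAG can be sketched as follows; the in-memory node shape (bets and children fields) is an assumed representation, not defined by the document.

```python
def compliant_chain(node: dict) -> bool:
    """Walk an ECT DAG depth-first; the chain is compliant only if every
    node's attached BETs report "pass". Node shape is hypothetical."""
    if any(bet["bhv_result"] != "pass" for bet in node.get("bets", [])):
        return False
    return all(compliant_chain(child) for child in node.get("children", []))

# Hypothetical two-agent delegation chain.
dag = {
    "agent": "agent-a",
    "bets": [{"bhv_result": "pass"}],
    "children": [
        {"agent": "agent-b", "bets": [{"bhv_result": "pass"}], "children": []},
    ],
}
print(compliant_chain(dag))  # True
```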
Security Considerations
Adversarial Behavior

Agents MAY attempt to behave correctly only when they detect monitoring. Mitigations include:

- Unpredictable monitoring intervals
- Covert observation modes, where the agent is not informed of the monitor's presence
- Cross-referencing BETs with external audit logs
Monitor Compromise

A compromised Runtime Monitor could produce fraudulent BETs. Mitigations include:

- Monitor attestation using RATS
- Multiple independent monitors with cross-validation
- Transparency logs for BETs, aligned with SCITT
Benchmark Manipulation

Agents or their operators MAY attempt to manipulate benchmark results. The anti-gaming provisions in Section 4.3.3 address this risk. Additionally:

- Benchmark harnesses MUST be operated by independent parties.
- Results MUST be signed by the harness operator.
- Benchmark profiles MUST be versioned and immutable once published.
Privacy of Behavioral Evidence

BETs contain information about agent actions that may be sensitive. Implementations MUST:

- Minimize the detail in bhv_evidence to what is necessary for verification.
- Support selective disclosure where possible.
- Protect BETs in transit using TLS.
- Define retention policies for behavioral evidence.
IANA Considerations
ECT Extension Claim Keys

This document requests registration of the following claim keys in the ECT ext claims registry:

Claim Key       Description
--------------  -------------------------
bhv_policy      Policy URI reference
bhv_result      Verification result
bhv_evidence    Observed actions hash
bhv_window      Observation period
bhv_details     Per-behavior results
perf_profile    Benchmark profile ID
perf_score      Aggregate benchmark score
perf_timestamp  Benchmark execution time
perf_harness    Test harness identifier
Benchmark Profile Media Type

This document requests registration of the following media type:

Type name: application
Subtype name: agent-benchmark-profile+json
Required parameters: N/A
Optional parameters: N/A
Encoding considerations: binary (UTF-8 JSON)
Security considerations: See Section 6
References

[RFC2119]  "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119.

[RFC8174]  "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174.

[RFC9334]  "Remote ATtestation procedureS (RATS) Architecture", RFC 9334.

[RFC7519]  "JSON Web Token (JWT)", RFC 7519.

[RFC7515]  "JSON Web Signature (JWS)", RFC 7515.

[RFC9110]  "HTTP Semantics", STD 97, RFC 9110.

[ECT]      "Execution Context Tokens for Distributed Agentic Workflows".

[ACPT]     "Agent Context Policy Token: DAG Delegation with Human Override".

[GAP]      "Gap Analysis for Autonomous Agent Protocols".

[SCITT]    "An Architecture for Trustworthy and Transparent Digital Supply Chains".
Acknowledgments The author thanks the contributors to the NMOP working group for discussions on agent operational requirements.