{% extends "base.html" %} {% set active_page = "about" %} {% block title %}About — IETF Draft Analyzer{% endblock %} {% block content %}
A research tool for tracking, categorizing, rating, and mapping standardization documents on AI and agent-related topics across six standards bodies: IETF, ISO/IEC, ITU-T, ETSI, NIST, and W3C. It uses Claude for analysis and rating, Ollama for embeddings, and SQLite for storage.
The dashboard provides interactive visualizations of the standardization landscape, including category breakdowns, rating distributions, author networks, extracted ideas, and gap analysis — answering the question: Where is the AI agent standards race heading, and what's missing?
IETF drafts are discovered via the IETF Datatracker API by searching abstracts for the keywords below (only drafts since {{ fetch_since }}). ISO, ITU-T, ETSI, NIST, and W3C documents are sourced from their respective public catalogs using related search terms.
1. Fetch — Query the Datatracker API for each keyword, deduplicate by draft name, and download the full text.
2. Rate — Claude rates each draft on five dimensions (novelty, maturity, overlap, momentum, relevance) from 1 to 5, with per-dimension explanations.
3. Categorize — Claude assigns one or more topic categories (e.g., "A2A protocols", "Agent identity/auth").
4. Extract Ideas — Claude extracts distinct technical ideas from each draft, with novelty scores.
5. Embed — Ollama generates vector embeddings for similarity analysis and clustering.
6. Author Network — Author and affiliation data are fetched from the Datatracker to build collaboration graphs.
7. Gap Analysis — Claude identifies areas where no existing draft adequately addresses a need.
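The deduplication in step 1 can be sketched as follows. This is a minimal illustration, not the project's actual code: the record fields (`name`, `rev`) are stand-ins for whatever the Datatracker response actually provides, and the sketch simply keeps the highest revision seen for each draft name.

```python
def dedupe_drafts(records):
    """Keep only the newest revision for each draft name.

    Each record is a dict with a 'name' (the draft name) and a 'rev'
    (revision string, e.g. "03"). Field names are illustrative.
    """
    latest = {}
    for rec in records:
        name = rec["name"]
        rev = int(rec["rev"])
        # Replace the stored record if this one is a newer revision.
        if name not in latest or rev > int(latest[name]["rev"]):
            latest[name] = rec
    return list(latest.values())


# Example: two revisions of the same draft collapse to one entry.
drafts = [
    {"name": "draft-example-ai-agent", "rev": "01"},
    {"name": "draft-example-ai-agent", "rev": "03"},
    {"name": "draft-other-agentic", "rev": "00"},
]
deduped = dedupe_drafts(drafts)
print(deduped)
```

Because keyword queries overlap (a draft mentioning both "agent" and "agentic" is returned twice), deduplicating by name before download avoids redundant fetches and duplicate ratings.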
Note on keyword selection: Keywords determine which drafts are included. Broad terms like "agent" and "autonomous" cast a wide net (catching some tangentially related drafts), while specific terms like "ai-agent" and "agentic" target the core AI agent space. The false-positive flag in ratings helps filter out irrelevant matches. Suggestions for additional keywords are welcome.
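The false-positive filter described above amounts to a simple query over the ratings store. The sketch below is hypothetical: the table name, columns, and sample rows are invented for illustration, but it shows how a `false_positive` flag lets broad keywords stay in the net while irrelevant matches (e.g. SNMP "agents") are excluded from the dashboard.

```python
import sqlite3

# In-memory database with an illustrative schema; the real project's
# schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ratings (draft TEXT, keyword TEXT, false_positive INTEGER)"
)
conn.executemany("INSERT INTO ratings VALUES (?, ?, ?)", [
    # Matched the broad keyword "agent" but is about SNMP agents.
    ("draft-snmp-agent-caps", "agent", 1),
    # A genuine AI-agent draft matched by a specific keyword.
    ("draft-ai-agent-auth", "ai-agent", 0),
])

# Only drafts not flagged as false positives feed the visualizations.
rows = conn.execute(
    "SELECT draft FROM ratings WHERE false_positive = 0"
).fetchall()
print(rows)  # -> [('draft-ai-agent-auth',)]
```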
Each draft is rated by Claude on five dimensions, scored from 1 (lowest) to 5 (highest):
| Dimension | What it measures |
|---|---|
| Novelty | Originality of contribution. Does it introduce genuinely new ideas? |
| Maturity | Completeness of the specification. Ready for implementation? |
| Overlap | Duplication with other drafts. High = redundant. Inverted in composite score. |
| Momentum | Activity level. Revisions, WG adoption, multi-org authorship. |
| Relevance | How directly related to AI agent infrastructure. |
Composite score = (novelty + maturity + (5 - overlap) + momentum + relevance) / 5. Overlap is inverted so lower overlap contributes positively.
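The composite formula translates directly into code. A minimal sketch (function name and example scores are illustrative):

```python
def composite_score(novelty, maturity, overlap, momentum, relevance):
    """Average of the five dimensions, with overlap inverted (5 - overlap)
    so that low duplication raises the score."""
    return (novelty + maturity + (5 - overlap) + momentum + relevance) / 5


# Example: overlap of 2 contributes (5 - 2) = 3 to the sum.
score = composite_score(novelty=4, maturity=3, overlap=2, momentum=4, relevance=5)
print(score)  # (4 + 3 + 3 + 4 + 5) / 5 = 3.8
```

Note that with dimensions scored 1–5, the inverted overlap term ranges 0–4 rather than 1–5, so a maximally strong draft (all 5s except overlap of 1) scores 4.8 rather than 5.0.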