{% extends "base.html" %} {% set active_page = "about" %} {% block title %}About — IETF Draft Analyzer{% endblock %} {% block content %}

About IETF Draft Analyzer

What is this?

A tool for tracking, categorizing, rating, and mapping IETF Internet-Drafts focused on AI and agent-related topics. It uses Claude for analysis and rating, Ollama for embeddings, and SQLite for storage.

The dashboard provides interactive visualizations of the draft landscape, including category breakdowns, rating distributions, author networks, extracted ideas, and gap analysis.

Current Data

Total Drafts
{{ stats.total_drafts }}
Rated Drafts
{{ stats.rated_count }}
Authors Tracked
{{ stats.author_count }}
Ideas Extracted
{{ stats.idea_count }}
Gaps Identified
{{ stats.gap_count }}
API Tokens Used
{{ "{:,}".format(stats.input_tokens + stats.output_tokens) }}

Data Collection Methodology

Drafts are discovered by searching the IETF Datatracker API for documents whose abstract contains any of the following keywords. Only drafts submitted since {{ fetch_since }} are included.

Search Keywords

{% for kw in search_keywords %} {{ kw }} {% endfor %}
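
To make the discovery step concrete, here is a minimal sketch of the kind of query the fetch stage runs against the Datatracker document API. The endpoint is real, but the filter names (abstract__icontains, time__gte) and the deduplication logic are assumptions about how such a search could be written, not the analyzer's actual code.

    import requests

    DATATRACKER = "https://datatracker.ietf.org/api/v1/doc/document/"

    def discover_drafts(keywords, since):
        # Search for Internet-Drafts whose abstract mentions any keyword,
        # keeping only one entry per draft name. Filter parameter names are
        # assumed Django-style lookups, not confirmed API parameters.
        seen = {}
        for kw in keywords:
            params = {
                "format": "json",
                "type": "draft",
                "abstract__icontains": kw,   # assumed filter name
                "time__gte": since,          # assumed filter name
                "limit": 100,
            }
            resp = requests.get(DATATRACKER, params=params, timeout=30)
            resp.raise_for_status()
            for doc in resp.json().get("objects", []):
                seen.setdefault(doc["name"], doc)   # dedupe by draft name
        return list(seen.values())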

Analysis Pipeline

1. Fetch — Query the Datatracker API for each keyword (as sketched above), deduplicate by draft name, and download the full text.

2. Rate — Claude rates each draft on 5 dimensions (novelty, maturity, overlap, momentum, relevance) from 1–5, with per-dimension explanations (see the sketch after this list).

3. Categorize — Claude assigns one or more topic categories (e.g., "A2A protocols", "Agent identity/auth").

4. Extract Ideas — Claude extracts distinct technical ideas from each draft, with novelty scores.

5. Embed — Ollama generates vector embeddings for similarity analysis and clustering (sketched after this list).

6. Author Network — Author and affiliation data are fetched from the Datatracker to build collaboration graphs (see the edge-counting sketch after this list).

7. Gap Analysis — Claude identifies areas where no existing draft adequately addresses a need.
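
A rough sketch of the rating step (step 2), using the anthropic Python SDK. The model name, prompt wording, and response handling are illustrative assumptions, not the analyzer's actual prompt or parsing.

    import json
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    RATING_PROMPT = (
        "Rate this Internet-Draft from 1 (lowest) to 5 (highest) on novelty, "
        "maturity, overlap, momentum, and relevance, with a short explanation "
        "for each. Reply with a single JSON object keyed by dimension."
    )

    def rate_draft(draft_text, model="claude-3-5-sonnet-latest"):
        # Model name and prompt wording are placeholders for illustration.
        resp = client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": RATING_PROMPT + "\n\n" + draft_text}],
        )
        # Assumes the reply is pure JSON; the real pipeline is likely more defensive.
        return json.loads(resp.content[0].text)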
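
The embedding step (step 5) can be as simple as a call to a local Ollama server's embeddings endpoint. The model name used here (nomic-embed-text) is an assumption.

    import requests

    def embed(text, model="nomic-embed-text"):
        # Request a vector embedding from a locally running Ollama server.
        resp = requests.post(
            "http://localhost:11434/api/embeddings",
            json={"model": model, "prompt": text},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["embedding"]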
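
For the author network (step 6), collaboration edges can be built by counting how many drafts each pair of authors shares; a minimal sketch:

    from collections import Counter
    from itertools import combinations

    def coauthor_edges(drafts_authors):
        # drafts_authors: iterable of author-name lists, one list per draft.
        edges = Counter()
        for authors in drafts_authors:
            for a, b in combinations(sorted(set(authors)), 2):
                edges[(a, b)] += 1   # edge weight = number of shared drafts
        return edges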

Note on keyword selection: Keywords determine which drafts are included. Broad terms like "agent" and "autonomous" cast a wide net (catching some tangentially related drafts), while specific terms like "ai-agent" and "agentic" target the core AI agent space. The false-positive flag in ratings helps filter out irrelevant matches. Suggestions for additional keywords are welcome.

Scoring Methodology

Each draft is rated by Claude AI on five dimensions, scored from 1 (lowest) to 5 (highest):

Dimension   What it measures
Novelty     Originality of contribution. Does it introduce genuinely new ideas?
Maturity    Completeness of the specification. Ready for implementation?
Overlap     Duplication with other drafts. High = redundant. Inverted in composite score.
Momentum    Activity level. Revisions, WG adoption, multi-org authorship.
Relevance   How directly related to AI agent infrastructure.

Composite score = (novelty + maturity + (5 - overlap) + momentum + relevance) / 5. Overlap is inverted so lower overlap contributes positively.
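
In code, the composite is just that average:

    def composite_score(novelty, maturity, overlap, momentum, relevance):
        # Overlap is inverted (5 - overlap) so that low overlap raises the score.
        return (novelty + maturity + (5 - overlap) + momentum + relevance) / 5

For example, a draft rated 4 (novelty), 3 (maturity), 2 (overlap), 4 (momentum), and 5 (relevance) scores (4 + 3 + 3 + 4 + 5) / 5 = 3.8.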

Tech Stack
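
Claude: draft rating, categorization, idea extraction, and gap analysis
Ollama: vector embeddings for similarity and clustering
SQLite: storage
IETF Datatracker API: draft discovery and author/affiliation data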

{% endblock %}