# IETF Draft Analyzer — Project Instructions

## What This Is

Python CLI tool (`ietf`) to track, categorize, rate, and map IETF Internet-Drafts on AI/agent topics. 361 drafts, 403 authors, 1,262 ideas, 12 gaps. Uses Claude for analysis, Ollama for embeddings, SQLite for storage.

## Key Paths

- Source: `src/ietf_analyzer/`
- Database: `data/drafts.db` (NOT `data/ietf_drafts.db`)
- Reports: `data/reports/`
- Blog series: `data/reports/blog-series/`
- Agent definitions: `.claude/agents/`
- Team prompt: `scripts/agent-team-prompt.md`
- Scripts: `scripts/`
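
For orientation, a minimal sketch of opening the database at its canonical path and counting drafts (the `drafts` table name is an assumption for illustration, not confirmed by these instructions):

```python
import sqlite3

# Minimal sketch: open the canonical DB path and count rows.
# The table name "drafts" is an assumption for illustration only.
def count_drafts(db_path="data/drafts.db"):
    conn = sqlite3.connect(db_path)
    try:
        (n,) = conn.execute("SELECT COUNT(*) FROM drafts").fetchone()
        return n
    finally:
        conn.close()
```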

## Development Journal

**Every agent and every session MUST log development milestones to `data/reports/dev-journal.md`.**

This journal serves two purposes:

1. Track progress across sessions so nothing gets lost
2. Source material for the meta blog post about using Claude agent teams to build this project

### What to Log

Append entries in this format:

```markdown
### [DATE] [AGENT/SESSION] — [SHORT TITLE]

**What**: [What was done — features built, analyses run, posts written]
**Why**: [The reasoning or decision behind it]
**Result**: [Outcome, key numbers, links to artifacts]
**Surprise**: [Optional — anything unexpected, a lesson learned, a tool limitation hit]
**Cost**: [Optional — API tokens, time taken, model used]
```
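
A small helper can keep entries consistent across agents; this is an illustrative sketch (the function name, signature, and which fields are required are assumptions, not part of the codebase):

```python
from datetime import date
from pathlib import Path

# Illustrative sketch, not part of the codebase: append an entry in the
# journal format above. Optional fields are skipped when not provided.
def log_milestone(agent, title, what, why, result,
                  surprise=None, cost=None,
                  journal="data/reports/dev-journal.md"):
    lines = [
        f"\n### {date.today().isoformat()} {agent} — {title}\n",
        f"**What**: {what}",
        f"**Why**: {why}",
        f"**Result**: {result}",
    ]
    if surprise:
        lines.append(f"**Surprise**: {surprise}")
    if cost:
        lines.append(f"**Cost**: {cost}")
    entry = "\n".join(lines) + "\n"
    path = Path(journal)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```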

### Examples of What to Log

- Pipeline runs (how many drafts processed, cost, any failures)
- New features implemented (what, why, how it changed the analysis)
- Blog posts drafted or revised (key editorial decisions)
- Architectural decisions (why we structured something a certain way)
- Agent coordination moments (when one agent's output changed another's direction)
- Surprises in the data (unexpected findings that shifted the narrative)
- Tool/infra issues (things that broke, workarounds found)

### What NOT to Log

- Routine file reads or searches
- Minor formatting fixes
- Anything already captured in git commits

## Agent Team Conventions

When working as a team:

1. **Architect** designs the narrative arc and reviews everything for coherence
2. **Analyst** runs the pipeline, queries the DB, provides data packages
3. **Coder** implements new features following existing patterns (Click CLI, SQLite, rich output)
4. **Writer** produces the blog series from data packages and architectural guidance

**Always launch agents in parallel when possible.** If agents have independent tasks (e.g., Analyst querying data while Writer drafts from existing material, or Coder implementing features that don't depend on each other), spawn them concurrently in a single message rather than sequentially. Only run agents sequentially when one genuinely depends on another's output.

All agents should:
- Read `scripts/agent-team-prompt.md` for the full brief
- Log milestones to `data/reports/dev-journal.md`
- Write blog posts to `data/reports/blog-series/`
- Save reusable scripts to `scripts/`
- Follow existing code patterns (don't over-engineer)

## Blog Series

7 posts planned in `data/reports/blog-series/` (01 through 07), plus:
- **Post 8: "Agents Building the Agent Analysis"** — Meta post about using Claude Code agent teams to analyze and write about IETF agent standards. The dev-journal.md is the source material for this post.

## Code Conventions

- CLI: Click commands in `cli.py` with `@click.option()` decorators
- DB: Tables in `db.py` `ensure_tables()`, queries as methods on `DraftDB`
- Reports: Report types in `reports.py` `generate_report()`
- Always cache Claude API calls via `llm_cache` table
- Use `rich` for console output
- Save multi-step workflows as scripts in `scripts/`
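
The cache-first convention can be sketched like this (the two-column `llm_cache` schema, the helper name, and the key scheme are assumptions for illustration; the real table lives in `db.py`):

```python
import hashlib
import json
import sqlite3

# Illustrative cache-first sketch: check llm_cache before calling Claude.
# The schema, helper name, and hash key are assumptions, not the real code.
def cached_llm_call(conn, prompt, call_fn, model="claude"):
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    row = conn.execute(
        "SELECT response FROM llm_cache WHERE key = ?", (key,)
    ).fetchone()
    if row is not None:
        return json.loads(row[0])        # cache hit: no API call made
    response = call_fn(prompt)           # cache miss: real call goes here
    conn.execute(
        "INSERT INTO llm_cache (key, response) VALUES (?, ?)",
        (key, json.dumps(response)),
    )
    conn.commit()
    return response
```

Used this way, re-running a pipeline over already-analyzed drafts costs no API tokens, since every repeated prompt resolves from the table.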

## Current Status (2026-03-03)

- v0.2.0, 361 drafts (101 new, unprocessed)
- 101 new drafts need: analyze, authors, ideas, embed, gaps
- Blog series: planned, not yet written
- Agent team: defined in `.claude/agents/`, ready to launch