ietf-draft-analyzer/pyproject.toml
Christian Nennemann 1ec1f69bee v0.3.0: Publication-ready release with blog site, paper update, and polish
Release prep:
- Version bump to 0.3.0 (pyproject.toml, cli.py)
- Rewrite README.md with current stats (475 drafts, 713 authors, 501 ideas)
- Add CONTRIBUTING.md with dev setup and code conventions

Blog site:
- Add scripts/build-site.py (markdown → HTML with clean CSS, dark mode, nav)
- Generate static site in docs/blog/ (10 pages)
- Ready for GitHub Pages deployment

Academic paper (paper/main.tex):
- Update all counts: 474→475 drafts, 557→710 authors, 1907→462 ideas, 11→12 gaps
- Add false-positive filtering methodology (113 excluded, 361 relevant)
- Add cross-org convergence analysis (132 ideas, 33% rate)
- Add GDPR compliance gap to gap table
- Add LLM-as-judge caveats to rating methodology and limitations
- Add FIPA, IEEE P3394, W3C WoT to related work with bibliography entries
- Fix safety ratio to show monthly variation (1.5:1 to 21:1)

Pipeline:
- Fetch 1 new draft (475 total), 3 new authors (713 total)
- Fix 16 ruff lint errors across test files
- All 106 tests pass

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:54:43 +01:00


[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"

[project]
name = "ietf-draft-analyzer"
version = "0.3.0"
description = "Track, categorize, and rate AI/agent-related IETF Internet-Drafts"
requires-python = ">=3.11"
dependencies = [
    "click>=8.0",
    "httpx>=0.27",
    "anthropic>=0.40",
    "ollama>=0.4",
    "rich>=13.0",
    "numpy>=1.26",
    "python-dotenv>=1.0",
    "plotly>=5.18",
    "matplotlib>=3.8",
    "seaborn>=0.13",
    "scipy>=1.11",
    "scikit-learn>=1.3",
    "networkx>=3.2",
    "markdown>=3.5",
    "flask>=3.0",
]

[project.optional-dependencies]
test = ["pytest", "pytest-cov"]

[project.scripts]
ietf = "ietf_analyzer.cli:main"

[tool.setuptools.packages.find]
where = ["src"]

[tool.pytest.ini_options]
pythonpath = ["src"]