---
name: review
description: |
Review-only mode. Run Guardian + optional reviewers on an existing diff or branch,
without any Plan/Do orchestration. The highest-ROI mode for catching design-level bugs.
User: "af-review"
User: "Review the last commit"
User: "af-review --reviewers guardian,skeptic"
---
# ArcheFlow Review Mode
Run reviewers on existing code changes without orchestrating implementation.
This is the most cost-effective mode — it delivers Guardian's error-path analysis
without the Maker overhead.
## When to Use
- After you've implemented something and want a quality check
- On a PR or branch before merging
- When the sprint runner flags a task as DONE_WITH_CONCERNS
- As a pre-commit quality gate for complex changes
## Invocation
```
af-review # Review uncommitted changes
af-review --branch feat/batch-api # Review branch diff against main
af-review --commit HEAD~3..HEAD # Review last 3 commits
af-review --reviewers guardian,skeptic,sage # Choose reviewers (default: guardian)
af-review --evidence # Enable evidence-gating (stricter)
```
---
## Execution
### Step 1: Get the Diff
Use `lib/archeflow-review.sh` to extract the diff and stats:
```bash
# Uncommitted changes (default)
DIFF=$(bash lib/archeflow-review.sh)
# Branch diff against main
DIFF=$(bash lib/archeflow-review.sh --branch feat/batch-api)
# Commit range
DIFF=$(bash lib/archeflow-review.sh --commit HEAD~3..HEAD)
# Override base branch
DIFF=$(bash lib/archeflow-review.sh --branch feat/x --base develop)
# Stats only (no diff output)
bash lib/archeflow-review.sh --stat-only
```
The script prints the diff to stdout and stats to stderr. It exits 1 if the diff
is empty (nothing to review). For large diffs (>500 lines), it warns on stderr.
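The contract above (diff on stdout, stats on stderr, exit 1 when empty, warn past 500 lines) can be sketched as a single function. This is a hypothetical reconstruction of what `lib/archeflow-review.sh` does internally — the function name and every implementation detail are assumptions; only the stdout/stderr/exit-code behavior comes from the text above.

```shell
# Hypothetical sketch of the diff-extraction logic; not the real script.
extract_diff() {   # usage: extract_diff [<git diff range, e.g. main...feat/x>]
  local range="${1:-}" diff lines
  git diff --stat $range >&2               # stats go to stderr
  diff=$(git diff $range)
  if [ -z "$diff" ]; then
    echo "nothing to review" >&2           # empty diff: exit 1
    return 1
  fi
  lines=$(printf '%s\n' "$diff" | wc -l)
  if [ "$lines" -gt 500 ]; then
    echo "warning: large diff ($lines lines)" >&2
  fi
  printf '%s\n' "$diff"                    # diff goes to stdout
}
```

Keeping the diff on stdout and everything else on stderr is what lets callers capture it cleanly with `DIFF=$(bash lib/archeflow-review.sh)`.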
### Step 2: Spawn Reviewers
Default: Guardian only (fastest, highest ROI).
With `--reviewers`: spawn requested reviewers in parallel.
**Guardian** (always first):
```
Agent(
description: "Guardian: review changes for ",
prompt: "You are the GUARDIAN archetype — security and risk reviewer.
Review this diff for: security vulnerabilities, error handling gaps,
data loss scenarios, race conditions, and breaking changes.
For each finding: cite specific code (file:line), state what you tested
or observed, state what the correct behavior should be.
Diff:
<diff from Step 1>

End your review with a single status line:
STATUS: DONE | DONE_WITH_CONCERNS | NEEDS_CONTEXT | BLOCKED",
subagent_type: "code-reviewer"
)
```
**Skeptic** (if requested):
- Focus: hidden assumptions, edge cases, scalability
- Context: diff + any design docs
**Sage** (if requested):
- Focus: code quality, test coverage, maintainability
- Context: diff + surrounding code
**Trickster** (if requested):
- Focus: adversarial inputs, failure injection, chaos testing
- Context: diff only
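The comma-separated `--reviewers` value has to be split and validated before spawning. A minimal helper, assuming a grep-free pure-bash approach — the function name and error handling are assumptions; only the four archetype names come from the list above:

```shell
# Hypothetical helper: split and validate a --reviewers value.
parse_reviewers() {   # usage: parse_reviewers "guardian,skeptic"
  local raw="${1:-guardian}" r     # default matches the doc: guardian only
  for r in ${raw//,/ }; do         # turn commas into word-split spaces
    case "$r" in
      guardian|skeptic|sage|trickster) echo "$r" ;;
      *) echo "unknown reviewer: $r" >&2; return 1 ;;
    esac
  done
}
```

Rejecting unknown names early keeps a typo like `--reviewers gaurdian` from silently running zero reviewers.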
### Step 3: Collect and Report
Parse each reviewer's output. Show findings:
```
── af-review: ───────────────────────
Reviewers: guardian, skeptic
🛡️ Guardian: 2 findings (1 HIGH, 1 MEDIUM)
[HIGH] Timeout marks variant as done — loses batch state (fanout.py:552)
[MEDIUM] No JSON error handling on corrupted state (batch.py:310)
🤔 Skeptic: 1 finding (1 INFO)
[INFO] hash() non-deterministic across processes (fanout.py:524)
Total: 3 findings (1 HIGH, 1 MEDIUM, 1 INFO)
────────────────────────────────────────────────
```
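The severity tallies in the report above can be computed with a small helper. This is a sketch: only the `[HIGH]`/`[MEDIUM]`/`[INFO]` markers come from the report format; the function itself is an assumption.

```shell
# Hypothetical tally helper: count severity markers in one reviewer's output.
count_findings() {   # usage: count_findings <<< "$reviewer_output"
  local out sev line=""
  out=$(cat)                       # read the whole review from stdin
  for sev in HIGH MEDIUM INFO; do
    # grep -c exits nonzero on zero matches but still prints "0"
    line="$line $sev=$(printf '%s\n' "$out" | grep -c "\[$sev\]" || true)"
  done
  echo "${line# }"                 # e.g. "HIGH=1 MEDIUM=1 INFO=0"
}
```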
### Step 4: Evidence Gate (if --evidence)
When `--evidence` is active, apply the evidence requirements from `archeflow:check-phase`:
- Scan findings for banned phrases ("might be", "could potentially", etc.)
- Check for evidence markers (exit codes, line numbers, reproduction steps)
- Downgrade unsupported findings to INFO
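A minimal sketch of that gate, assuming a grep-based scan: the two banned phrases come from the list above, but the evidence-marker regex and the function itself are assumptions about how `archeflow:check-phase` applies the rules.

```shell
# Hypothetical evidence gate: PASS keeps the finding's severity,
# DOWNGRADE demotes it to INFO.
gate_finding() {   # usage: gate_finding "finding text"
  local f="$1"
  if printf '%s' "$f" | grep -qiE 'might be|could potentially'; then
    echo DOWNGRADE                 # banned speculative phrasing
  elif printf '%s' "$f" | grep -qE '[A-Za-z0-9_./-]+:[0-9]+|exit code|repro'; then
    echo PASS                      # cites file:line or reproduction evidence
  else
    echo DOWNGRADE                 # no evidence marker found
  fi
}
```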
---
## Integration with Sprint Runner
The sprint runner can invoke `af-review` automatically:
| Sprint trigger | Review action |
|----------------|--------------|
| Task marked DONE_WITH_CONCERNS | Run Guardian on the agent's changes |
| Task is L/XL estimate | Run Guardian + Skeptic after completion |
| Task involves security keywords | Run Guardian automatically |
| User requests | Run specified reviewers |
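The trigger table above reduces to a small dispatch helper. The status value and L/XL estimates come from the table; the security keyword list and the function itself are assumptions for illustration.

```shell
# Hypothetical dispatcher: map a sprint task to the reviewers to run.
pick_reviewers() {   # usage: pick_reviewers <status> <estimate> <title>
  local status="$1" estimate="$2" title="$3"
  if [ "$status" = "DONE_WITH_CONCERNS" ]; then echo guardian; return; fi
  case "$estimate" in L|XL) echo "guardian,skeptic"; return ;; esac
  case "$title" in                       # assumed security keyword list
    *auth*|*token*|*crypt*|*secret*) echo guardian; return ;;
  esac
  echo ""                                # no automatic review triggered
}
```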
---
## Cost
Review-only is 60-80% cheaper than full PDCA:
- No Explorer research (~30% of PDCA cost)
- No Creator planning (~20% of PDCA cost)
- No Maker implementation (already done)
- Only reviewer token costs remain