5 Commits

Author SHA1 Message Date
130c04fa58 feat: corrective action framework + CLAUDE.md rewrite + v0.8.0 cleanup
- Extend shadow-detection with 3-layer corrective action framework:
  archetype shadows, system shadows (tunnel vision, echo chamber, etc.),
  and policy boundaries (checkpoints, budget gates, circuit breakers)
- Rewrite CLAUDE.md with proper guardrails (DO/DO NOT, skill writing rules,
  200-line max per skill, no bash pseudo-code in skills)
- Update plugin.json to v0.8.0 with consolidated 19-skill list
- Update README architecture tree and skills reference
- Update using-archeflow version string to v0.8.0 / 19 skills
- Remove 8 empty skill directories (absorbed into run skill)
2026-04-06 20:52:27 +02:00
752177528f refactor: trim act-phase skill from 371 to 140 lines
Remove duplicated routing tables, verbose JSON event examples,
writing/prose domain template (belongs in domains/colette-bridge),
--start-from section (belongs in run skill), and redundant checklist.
Consolidate three Agent() templates into one compact template.
Preserve all routing rules, decision logic, and feedback format.
2026-04-06 20:50:59 +02:00
a1667633ad Merge branch 'refactor/consolidate-check-phase-v2' into refactor/trim-secondary-skills
# Conflicts:
#	skills/colette-bridge/SKILL.md
#	skills/using-archeflow/SKILL.md
2026-04-06 20:50:31 +02:00
8837a359ac refactor: simplify memory and shadow-detection skills
Trim verbose implementation details that duplicate what the bash helper
scripts already handle. Memory skill: 278 -> 120 lines. Shadow detection
skill: 180 -> 66 lines. All essential protocols, tables, and commands
preserved; removed redundant algorithm descriptions, multiple examples,
and narrative prose.
2026-04-06 20:42:47 +02:00
af1f4e7da7 refactor: merge attention-filters into check-phase skill
Consolidate the attention-filters skill (122 lines) into check-phase,
reducing check-phase from 234 to 110 lines. Removed verbose bash code
blocks, 30-line consolidated output example, re-check protocol (belongs
in act-phase), and motivational section. Updated all references in
README, plugin.json, using-archeflow, and colette-bridge.
2026-04-06 20:41:36 +02:00
8 changed files with 408 additions and 802 deletions


@@ -1,7 +1,7 @@
{
"name": "archeflow",
"description": "Multi-agent orchestration with Jungian archetypes. PDCA quality cycles, shadow detection, git worktree isolation. Zero dependencies — works with any Claude Code session.",
"version": "0.7.0",
"version": "0.8.0",
"author": {
"name": "Chris Nennemann"
},
@@ -14,12 +14,11 @@
"shadow-detection", "workflows"
],
"skills": [
"run", "orchestration", "plan-phase", "do-phase", "check-phase", "act-phase",
"shadow-detection", "attention-filters", "convergence", "artifact-routing",
"process-log", "memory", "effectiveness", "progress",
"colette-bridge", "git-integration", "multi-project",
"custom-archetypes", "workflow-design", "domains", "cost-tracking",
"templates", "autonomous-mode", "using-archeflow", "presence"
"run", "sprint", "review", "check-phase", "act-phase",
"shadow-detection", "memory", "progress", "presence",
"colette-bridge", "git-integration", "multi-project", "cost-tracking",
"custom-archetypes", "workflow-design", "domains",
"templates", "autonomous-mode", "using-archeflow"
],
"hooks": "hooks/hooks.json"
}

CLAUDE.md

@@ -1,71 +1,119 @@
# archeflow — Multi-Agent Orchestration Plugin for Claude Code
Workspace-level orchestration: parallel agent teams across project portfolios, PDCA cycles with Jungian archetype roles, sprint runner, and post-implementation review. Installed as a Claude Code plugin.
## Tech Stack
- **Runtime:** Bash (lib scripts) + Claude Code skill system (Markdown skills)
- **No build step, no dependencies** — pure bash + markdown
- **Plugin format:** Claude Code plugin (skills/, hooks/, agents/, templates/)
## Key Commands
```bash
# Use via Claude Code slash commands:
/af-sprint # Main mode: work the queue across projects
/af-run <task> # Deep orchestration with PDCA cycles
/af-review # Post-implementation security/quality review
/af-status # Current run status
/af-init # Initialize ArcheFlow in a project
/af-score # Archetype effectiveness scores
/af-memory # Cross-run lesson memory
/af-report # Full process report
/af-fanout # Colette book fanout via agents
```
PDCA quality cycles with Jungian archetype roles, corrective action framework, sprint runner, and post-implementation review. Zero dependencies — pure Bash + Markdown.
## Architecture
```
skills/ Slash command implementations (one dir per skill)
sprint/ /af-sprint — queue-driven parallel agent runner
run/ /af-run — PDCA orchestration
skills/ Slash commands and internal protocols (one SKILL.md per dir)
run/ /af-run — self-contained PDCA orchestration (core skill)
sprint/ /af-sprint — queue-driven parallel agent dispatch
review/ /af-review — Guardian-led code review
plan-phase/ PDCA Plan phase
do-phase/ PDCA Do phase
check-phase/ PDCA Check phase
act-phase/ PDCA Act phase
check-phase/ Shared reviewer protocol (used by run + review)
act-phase/ Finding collection, fix routing, exit decisions
shadow-detection/ Corrective action framework (archetype + system + policy)
memory/ Cross-run lessons learned
cost-tracking/ Token/cost awareness
cost-tracking/ Token/cost awareness and budget enforcement
domains/ Domain detection (code, writing, research)
... ~25 skill directories
hooks/
hooks.json Hook definitions
session-start/ Auto-activation on session start
agents/ Archetype agent definitions
explorer.md Divergent thinking, research
creator.md Design, architecture
maker.md Implementation
guardian.md Security, risk, quality gates
sage.md Wisdom, patterns, trade-offs
skeptic.md Devil's advocate
trickster.md Edge cases, unconventional approaches
lib/ Bash helper scripts (git, DAG, events, progress, etc.)
colette-bridge/ Writing context loader from colette.yaml
multi-project/ Cross-repo orchestration with dependency DAG
git-integration/ Per-phase commits, branch strategy, rollback
templates/ Workflow/team bundle gallery
autonomous-mode/ Unattended session protocol
using-archeflow/ Session-start activation (auto-loaded via hook)
agents/ Archetype personality definitions (one .md per archetype)
lib/ Bash helper scripts (events, git, memory, progress, etc.)
hooks/ Session-start hook (injects using-archeflow)
templates/bundles/ Pre-configured workflow bundles
docs/ Roadmap, dogfood notes, test reports
```
## Domain Rules
## Commands
- Skills are Markdown files with frontmatter — follow existing skill format exactly
- Agents are archetype personas — maintain their distinct voice and perspective
- Dogfood observations go to `archeflow/.archeflow/memory/lessons.jsonl`
- Cost tracking: prefer cheap models for bulk ops, expensive for creative/review
- PDCA cycle order is mandatory: Plan -> Do -> Check -> Act
| Command | Purpose |
|---------|---------|
| `/af-run <task>` | PDCA orchestration with full agent cycle |
| `/af-sprint` | Work the queue across projects |
| `/af-review` | Review existing code changes |
| `/af-status` | Current/last run status |
| `/af-init` | Initialize ArcheFlow in a project |
| `/af-score` | Archetype effectiveness scores |
| `/af-memory` | Cross-run lesson memory |
| `/af-report` | Full process report |
| `/af-fanout` | Colette book fanout via agents |
## Do NOT
## Core Concepts
- Add runtime dependencies — this must stay zero-dependency
- Change archetype personalities without updating all referencing skills
- Skip the Check phase in PDCA cycles (quality gate)
- Modify hooks.json format without testing plugin reload
- Use ArcheFlow to orchestrate simple single-file tasks (overhead not justified)
### PDCA Cycle
```
Plan (Explorer + Creator) -> Do (Maker in worktree) -> Check (Guardian first, then others) -> Act (fix, merge, or cycle)
```
### Archetypes
Explorer (research), Creator (design), Maker (implement), Guardian (security), Skeptic (assumptions), Trickster (edge cases), Sage (quality). Each has a virtue and a shadow — see `shadow-detection` skill.
### Corrective Action Framework
Three layers, one escalation protocol:
- **Archetype shadows** — individual agent dysfunction
- **System shadows** — orchestration-level issues (echo chamber, tunnel vision, scope creep)
- **Policy boundaries** — operational limits (checkpoints, budgets, circuit breakers)
### Workflows
| Risk Level | Workflow | Agents |
|------------|----------|--------|
| Low | `fast` | Creator -> Maker -> Guardian |
| Medium | `standard` | Explorer + Creator -> Maker -> Guardian + Skeptic + Sage |
| High | `thorough` | Explorer + Creator -> Maker -> All 4 reviewers |
## Guardrails
### DO
- Keep skills self-contained. The `run` skill needs zero prerequisites — it was consolidated for a reason.
- Write skills as operational instructions Claude can follow, not software specifications.
- Use tables for reference data, numbered steps for protocols.
- Emit events via `./lib/archeflow-event.sh` — but never let logging block orchestration.
- Maintain the corrective action framework when adding new agent types.
- Test skill changes by running `/af-run --dry-run` and verifying the flow.
- Keep archetype personalities distinct — each agent definition in `agents/` has a specific voice.
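The non-blocking emit rule can be sketched as follows; the `emit_event` helper name and the JSONL field layout here are assumptions, the real interface lives in `lib/archeflow-event.sh`:

```bash
# Sketch only: emit one JSONL event without letting logging block the run.
# emit_event and the field layout are assumptions, not the lib's real API.
EVENTS_FILE="${EVENTS_FILE:-$(mktemp)}"

emit_event() {
  local type="$1" phase="$2" agent="$3"
  # Append a single JSONL line; swallow errors so a full disk or a
  # missing file never aborts orchestration.
  printf '{"type":"%s","phase":"%s","agent":"%s","ts":"%s"}\n' \
    "$type" "$phase" "$agent" "$(date -u +%FT%TZ)" >>"$EVENTS_FILE" 2>/dev/null || true
}

emit_event "phase.start" "plan" "explorer"
```

The `|| true` is the point: a failed append logs nothing and the run keeps going.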
### DO NOT
- **Add runtime dependencies.** This must stay zero-dependency (Bash + Markdown only).
- **Bloat skills back up.** The consolidation from 27 to 19 skills was intentional. Do not create new skills for internal implementation details — inline them.
- **Write bash pseudo-code in skills.** Skills are Claude instructions, not shell scripts. Use one-liner commands or lib script references, not multi-line bash blocks.
- **Duplicate protocol definitions.** Finding format lives in `check-phase`. Routing table lives in `act-phase`. Shadow detection lives in `shadow-detection`. One source of truth per concept.
- **Skip the Check phase** in PDCA cycles. It's the quality gate.
- **Change archetype personalities** without updating all referencing skills and agent definitions.
- **Use ArcheFlow for trivial tasks.** Single-file fixes, config changes, questions — just do them directly.
- **Let skills exceed ~200 lines.** If a skill is growing past this, it probably needs splitting or the content belongs in a lib script.
### Skill Writing Rules
1. **Frontmatter**: `name` (kebab-case), `description` (one-liner + `<example>` tags for user-invocable skills)
2. **Structure**: Imperative voice. Lead with what to do, not why. Tables > prose. Steps > paragraphs.
3. **Agent templates**: Keep Agent() spawn templates concise. Include only the prompt, subagent_type, and isolation mode.
4. **Cross-references**: Use `archeflow:<skill-name>` backtick syntax to reference other skills. Avoid circular dependencies.
5. **Bash commands**: One-liners only in skills. Multi-step logic belongs in `lib/` scripts.
### Cost Awareness
- Prefer cheap models (haiku) for analytical tasks (validation, diff scoring)
- Use capable models (sonnet/opus) for creative tasks (writing, complex design)
- Budget enforcement via `cost-tracking` skill and `.archeflow/config.yaml`
- Track token spend per agent in events for post-run analysis
### Git Rules
- Signing: `git config gpg.format ssh`, key at `~/.ssh/id_ed25519_dev.pub`
- Push: `GIT_SSH_COMMAND="ssh -i /home/c/.ssh/id_ed25519_dev -o IdentitiesOnly=yes" git push origin main`
- Conventional commits: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`
- No Co-Authored-By trailers
- All work on worktree branches until explicitly merged
- Merges use `--no-ff` (individually revertable)
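The worktree-until-merge rule, sketched end to end in a throwaway repo (paths, identities, and branch names are illustrative):

```bash
# Sketch only: work on a worktree branch, merge with --no-ff so the
# whole run stays individually revertable.
set -e
repo="$(mktemp -d)"
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=af@local -c user.name=af commit -q --allow-empty -m "chore: init"

# Run work happens on a worktree branch, never directly on main
wt="$(mktemp -d)/run-001"
git -C "$repo" worktree add -q -b af/run-001 "$wt"
( cd "$wt" \
  && echo done > artifact.txt \
  && git add artifact.txt \
  && git -c user.email=af@local -c user.name=af commit -q -m "feat: run artifact" )

# --no-ff forces a merge commit even when fast-forward is possible,
# so one `git revert -m 1` undoes the entire run.
git -C "$repo" -c user.email=af@local -c user.name=af merge -q --no-ff -m "merge run-001" af/run-001
```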
## Dogfooding
When using ArcheFlow to develop ArcheFlow itself:
- Log observations to `.archeflow/memory/lessons.jsonl`
- Note friction points, shadow false positives, skill gaps
- Test skill changes with `/af-run --dry-run` before committing
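A minimal sketch of the lessons append; the field names (`ts`, `kind`, `note`) are assumptions, match whatever schema lessons.jsonl already uses:

```bash
# Sketch only: append one dogfood observation as a JSONL line.
# Field names are assumptions; match the existing lessons.jsonl schema.
LESSONS="${LESSONS:-.archeflow/memory/lessons.jsonl}"
mkdir -p "$(dirname "$LESSONS")"
printf '{"ts":"%s","kind":"friction","note":"%s"}\n' \
  "$(date -u +%FT%TZ)" "shadow false positive on Explorer word count" >>"$LESSONS"
```

Append-only JSONL keeps this zero-dependency: no lock, no parse step, one line per observation.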


@@ -146,61 +146,51 @@ Shadow detection is quantitative, not vibes. Explorer output exceeding 2000 word
## Skills Reference
ArcheFlow ships with 24 skills organized by function.
ArcheFlow ships with 19 skills organized by function. The `run` skill is self-contained -- no prerequisites needed.
### Core Orchestration
| Skill | Description |
|-------|-------------|
| `archeflow:run` | Automated PDCA execution loop -- single-command orchestration with `--start-from`, `--dry-run`, and cycle-back |
| `archeflow:orchestration` | Step-by-step PDCA execution guide for manual orchestration |
| `archeflow:plan-phase` | Explorer and Creator output formats and protocols |
| `archeflow:do-phase` | Maker implementation rules and worktree commit strategy |
| `archeflow:check-phase` | Shared reviewer protocols and output format |
| `archeflow:act-phase` | Post-Check decision logic: collect findings, route fixes, exit or cycle |
| `archeflow:run` | Self-contained PDCA orchestration -- Plan/Do/Check/Act with adaptation rules, pipeline strategy, and cycle-back |
| `archeflow:sprint` | Queue-driven parallel agent dispatch across projects (primary mode) |
| `archeflow:review` | Guardian-led code review on diff/branch/commit range |
| `archeflow:check-phase` | Shared reviewer protocol -- finding format, evidence requirements, attention filters |
| `archeflow:act-phase` | Finding collection, fix routing, exit decisions |
### Quality and Safety
| Skill | Description |
|-------|-------------|
| `archeflow:shadow-detection` | Quantitative dysfunction detection and automatic correction |
| `archeflow:attention-filters` | Context optimization per archetype -- each agent gets only what it needs |
| `archeflow:convergence` | Detects convergence, stalling, and oscillation in multi-cycle runs |
| `archeflow:artifact-routing` | Inter-phase artifact protocol -- naming, storage, routing, archiving |
### Process Intelligence
| Skill | Description |
|-------|-------------|
| `archeflow:process-log` | Event-sourced JSONL logging with DAG parent relationships |
| `archeflow:shadow-detection` | Corrective action framework -- archetype shadows, system shadows, policy boundaries |
| `archeflow:memory` | Cross-run memory that learns recurring findings and injects lessons |
| `archeflow:effectiveness` | Archetype scoring on signal-to-noise, fix rate, cost efficiency |
| `archeflow:progress` | Live progress file watchable from a second terminal |
### Integration
| Skill | Description |
|-------|-------------|
| `archeflow:colette-bridge` | Bridges ArcheFlow with the Colette writing platform |
| `archeflow:git-integration` | Git-per-phase commits, branch-per-run, rollback to any phase boundary |
| `archeflow:git-integration` | Per-phase commits, branch-per-run, rollback |
| `archeflow:multi-project` | Cross-repo orchestration with dependency DAG and shared budget |
| `archeflow:cost-tracking` | Budget enforcement, per-agent cost aggregation, model tier recommendations |
### Configuration
| Skill | Description |
|-------|-------------|
| `archeflow:domains` | Domain adapters for writing, research, and non-code workflows |
| `archeflow:custom-archetypes` | Create domain-specific roles (database reviewer, compliance auditor, etc.) |
| `archeflow:workflow-design` | Design custom workflows with per-phase archetype assignment and exit conditions |
| `archeflow:domains` | Domain adapters for writing, research, and other non-code workflows |
| `archeflow:cost-tracking` | Budget enforcement, per-agent cost aggregation, model tier recommendations |
| `archeflow:workflow-design` | Design custom workflows with per-phase archetype assignment |
| `archeflow:templates` | Template gallery for sharing workflows, teams, and setup bundles |
| `archeflow:autonomous-mode` | Unattended overnight sessions with progress logging and safe stopping |
| `archeflow:autonomous-mode` | Unattended sessions with corrective action checkpoints |
| `archeflow:progress` | Live progress file watchable from a second terminal |
| `archeflow:presence` | User-facing output format -- show outcomes, not mechanics |
### Meta
| Skill | Description |
|-------|-------------|
| `archeflow:using-archeflow` | Session-start skill -- activation criteria, workflow selection, quick reference |
| `archeflow:using-archeflow` | Session-start activation -- decision tree, workflow selection, commands |
## Library Scripts
@@ -341,47 +331,28 @@ archetypes: [explorer, creator, maker, guardian, db-specialist]
```
archeflow/
├── .claude-plugin/plugin.json # Plugin manifest (v0.5.0)
├── .claude-plugin/plugin.json # Plugin manifest
├── agents/ # 7 archetype personas (behavioral protocols)
│ ├── explorer.md # Plan: research and context mapping
│ ├── creator.md # Plan: solution design and proposals
│ ├── maker.md # Do: implementation in isolated worktree
├── guardian.md # Check: security and reliability review
├── skeptic.md # Check: assumption challenging
│ ├── trickster.md # Check: adversarial testing
│ └── sage.md # Check: holistic quality review
├── skills/ # 24 behavioral skills
│ ├── run/ # Automated PDCA loop
│ ├── orchestration/ # Manual PDCA execution guide
│ ├── plan-phase/ # Plan protocols
│ ├── do-phase/ # Do protocols
│ ├── check-phase/ # Check protocols
│ ├── act-phase/ # Act phase decision logic
│ ├── shadow-detection/ # Dysfunction detection
│ ├── attention-filters/ # Context optimization
│ ├── convergence/ # Cycle convergence detection
│ ├── artifact-routing/ # Inter-phase artifact protocol
│ ├── process-log/ # Event-sourced JSONL logging
│ ├── explorer.md, creator.md # Plan phase agents
│ ├── maker.md # Do phase agent
│ └── guardian.md, skeptic.md, # Check phase agents
│       trickster.md, sage.md
├── skills/ # 19 skills (consolidated from 27)
│ ├── run/ # Self-contained PDCA orchestration (core)
│ ├── sprint/ # Queue-driven parallel agent dispatch
│ ├── review/ # Guardian-led code review
│ ├── check-phase/ # Shared reviewer protocol + attention filters
│ ├── act-phase/ # Finding collection + fix routing
│ ├── shadow-detection/ # Corrective action framework (3 layers)
│ ├── memory/ # Cross-run learning
│ ├── effectiveness/ # Archetype scoring
│ ├── progress/ # Live progress file
│ ├── colette-bridge/ # Colette writing platform bridge
│ ├── git-integration/ # Per-phase git commits
│ ├── multi-project/ # Cross-repo orchestration
│ ├── custom-archetypes/ # Domain-specific roles
│ ├── workflow-design/ # Custom workflow design
│ ├── domains/ # Domain adapters
│ ├── cost-tracking/ # Budget and cost management
│ ├── templates/ # Template gallery
│ ├── autonomous-mode/ # Unattended sessions
│ └── using-archeflow/ # Session-start activation
├── lib/ # 8 shell scripts (process infrastructure)
│ └── ... # + 12 config/integration skills
├── lib/ # 10 shell scripts (events, git, memory, etc.)
├── hooks/ # Auto-activation (SessionStart)
├── examples/ # Walkthroughs, templates, custom archetypes
└── docs/ # Roadmap, changelog
```
The flow: skills define behavioral rules (what agents should do), agents define personas (how they think), lib scripts handle tooling (event logging, git, reporting), and hooks wire it all together at session start. Events are emitted at every phase transition, forming a DAG that can be rendered, reported, or scored after the run.
Skills define behavioral rules, agents define personas, lib scripts handle tooling, hooks wire it together at session start. The `run` skill is self-contained -- it absorbed 8 previously separate skills (orchestration, plan-phase, do-phase, artifact-routing, process-log, convergence, effectiveness, attention-filters) into one 459-line operational guide.
## Philosophy


@@ -1,292 +1,46 @@
---
name: act-phase
description: |
Use after the Check phase completes. Collects reviewer findings, prioritizes them, routes fixes to the right agent or tool, applies fixes systematically, and decides whether to exit or cycle.
Use after the Check phase completes. Collects reviewer findings, routes fixes, applies them, decides whether to exit or cycle.
<example>Automatically loaded during orchestration after Check phase</example>
<example>User: "Run just the act phase on existing findings"</example>
---
# Act Phase
After all reviewers complete, the Act phase turns findings into fixes and decides whether the cycle is done. This is the bridge between "what's wrong" and "what we do about it."
## Overview
Turn Check phase findings into fixes, then decide: exit or cycle.
```
Check phase output → Collect → Prioritize → Route → Fix → Verify → Exit or Cycle
Check output → Collect → Deduplicate → Route → Fix → Exit or Cycle
```
---
## Step 1: Finding Collection
## Step 1: Collect and Consolidate Findings
Parse all reviewer outputs into one consolidated findings table. Use the standardized format from the `check-phase` skill.
Parse all reviewer outputs into one table grouped by severity (CRITICAL / WARNING / INFO):
```markdown
## Findings Summary — Cycle N
### CRITICAL (must fix before next cycle)
| # | Source | Location | Category | Description | Suggested Fix |
|---|--------|----------|----------|-------------|---------------|
| 1 | guardian | src/auth/handler.ts:48 | security | Empty string bypasses validation | Add length check |
| 2 | trickster | src/api/parse.ts:92 | reliability | Null input causes crash | Guard with null check |
### WARNING (should fix)
| # | Source | Location | Category | Description | Suggested Fix |
|---|--------|----------|----------|-------------|---------------|
| 3 | sage | tests/auth.test.ts:15 | testing | Test names don't describe behavior | Rename to "should reject expired tokens" |
| 4 | guardian | src/auth/handler.ts:52 | security | Missing rate limit | Add rate limiter middleware |
### INFO (nice to have)
| # | Source | Location | Category | Description | Suggested Fix |
|---|--------|----------|----------|-------------|---------------|
| 5 | skeptic | src/auth/handler.ts:30 | design | Consider caching validated tokens | Add TTL cache |
```
### Deduplication
Before listing findings, deduplicate across reviewers (same rule as `check-phase`):
- Same file + same category + similar description = one finding
- Use the higher severity
- Credit all sources: `guardian + skeptic`
- Don't double-count in severity tallies
Same file + same category + similar description = one finding. Use the higher severity, credit all sources (e.g. `guardian + skeptic`).
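The dedup rule can be sketched in plain awk; the TSV input layout (source, severity, file, category) is an assumption, real findings live in the markdown tables above:

```bash
# Sketch only: dedup by file + category, keep the higher severity,
# and credit all sources. Input layout is an assumption:
# source<TAB>severity<TAB>file<TAB>category
dedup_findings() {
  awk -F'\t' '
    function rank(s) { return s == "CRITICAL" ? 3 : s == "WARNING" ? 2 : 1 }
    {
      key = $3 FS $4
      if (key in sev) {
        if (rank($2) > rank(sev[key])) sev[key] = $2   # keep higher severity
        src[key] = src[key] " + " $1                    # credit all sources
      } else { sev[key] = $2; src[key] = $1 }
    }
    END { for (k in sev) print src[k] FS sev[k] FS k }'
}

printf 'guardian\tWARNING\tsrc/auth.ts\tsecurity\nskeptic\tCRITICAL\tsrc/auth.ts\tsecurity\n' \
  | dedup_findings
```

Two overlapping findings collapse to one line credited `guardian + skeptic` at CRITICAL.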
### Cross-Cycle Tracking
### Cross-Cycle Tracking (cycle > 1)
Compare against prior cycle findings (if cycle > 1):
- **Resolved:** Finding from cycle N-1 no longer present → mark resolved, do not re-raise
- **Persisting:** Same location + category still present → increment `cycle_count`
- **New:** Finding not seen before → add with `cycle_count: 1`
Compare against prior cycle findings:
- **Resolved** — no longer present, mark resolved, do not re-raise
- **Persisting** — same location + category, increment `cycle_count`
- **New** — first appearance, `cycle_count: 1`
If a finding persists for 2+ consecutive cycles, flag for user escalation (see Step 5).
Finding persisting 2+ cycles = flag for escalation (see Step 4).
---
## Step 2: Fix Routing
Not all findings are fixed the same way. Route each finding based on its nature:
| Category | Fix Route | Rationale |
|----------|-----------|-----------|
| `security` | Spawn Maker with targeted instructions | Security fixes need tested code changes |
| `reliability` | Spawn Maker with targeted instructions | Same — code-level fix with test |
| `breaking-change` | Route to Creator in next cycle | Design decision needed |
| `design` | Route to Creator in next cycle | Architecture change, not a patch |
| `dependency` | Spawn Maker with targeted instructions | Package update or removal |
| `quality` | Spawn Maker or apply directly | Depends on scope (see below) |
| `testing` | Spawn Maker with targeted instructions | Tests need to be written and run |
| `consistency` | Apply directly or spawn Maker | Naming/style → direct. Pattern change → Maker |
### Direct Fix (no agent)
Apply directly with Edit tool when **all** of these are true:
- The fix is mechanical (typo, naming, formatting, import order)
- No behavioral change
- No test update needed
- Exactly one file affected
Examples: rename a variable, fix a typo in a string, reorder imports, fix indentation.
### Maker Fix (spawn agent)
Spawn a targeted Maker when the fix involves:
- Code logic changes
- New or modified tests
- Multiple files
- Any behavioral change
Provide the Maker with:
1. The specific finding(s) to address (not all findings — just the routed ones)
2. The file and line location
3. The suggested fix from the reviewer
4. The Maker's original branch (to apply fixes on top)
```
Agent(
description: "Fix: <finding description>",
prompt: "You are the MAKER archetype.
Apply this fix on branch: <maker's branch>
Finding: <source> | <severity> | <category>
Location: <file:line>
Issue: <description>
Suggested fix: <fix>
Rules:
1. Fix ONLY this issue — no other changes
2. Add/update tests if the fix changes behavior
3. Run existing tests — nothing may break
4. Commit with message: 'fix: <description>'
Do NOT refactor surrounding code.",
isolation: "worktree",
mode: "bypassPermissions"
)
```
### Writing/Prose Fix (domain-specific)
For writing projects (books, stories), voice or prose findings need special context:
```
Agent(
description: "Fix: voice drift in <file>",
prompt: "You are the MAKER archetype.
Apply this prose fix on branch: <maker's branch>
Finding: <source> | <severity> | <category>
Location: <file:line>
Issue: <description>
Voice profile to match: <load from .archeflow/config.yaml or project voice profile>
Rules:
1. Fix the flagged passage to match the voice profile
2. Do not rewrite surrounding paragraphs
3. Preserve the narrative intent — only change voice/style
4. Commit with message: 'fix: <description>'",
isolation: "worktree",
mode: "bypassPermissions"
)
```
### Design Fix (route to next cycle)
Findings that require design changes are NOT fixed in the Act phase. They become structured feedback for the Creator in the next PDCA cycle. Collect them into `act-feedback.md` (see Step 5).
---
## Step 3: Fix Application Protocol
Apply fixes in severity order: CRITICAL first, then WARNING, then INFO. Within the same severity, fix in file order (reduces context switching).
### For each fix:
1. **Apply the change** (direct edit or via Maker agent)
2. **Emit `fix.applied` event:**
```json
{
"type": "fix.applied",
"phase": "act",
"agent": "maker",
"data": {
"source": "guardian",
"finding": "Empty string bypasses validation",
"file": "src/auth/handler.ts",
"line": 48,
"severity": "CRITICAL",
"before": "<old code>",
"after": "<new code>"
},
"parent": [<seq of the review.verdict that found it>]
}
```
3. **Targeted re-check** (if the fix is non-trivial):
- Re-run only the reviewer that raised the finding
- Scope the re-check to just the changed file(s)
- If the re-check raises new findings → add them to the findings list with source `re-check:<reviewer>`
### Batching Maker Fixes
If multiple findings route to the same Maker and affect the same file or tightly coupled files, batch them into a single Maker spawn:
```
Agent(
description: "Fix: 3 findings in src/auth/",
prompt: "You are the MAKER archetype.
Apply these fixes on branch: <maker's branch>
1. [CRITICAL] src/auth/handler.ts:48 — Empty string bypass → Add length check
2. [WARNING] src/auth/handler.ts:52 — Missing rate limit → Add middleware
3. [WARNING] tests/auth.test.ts:15 — Bad test names → Rename to behavior descriptions
Fix all three. Commit each as a separate commit.
Run tests after all fixes."
)
```
Batch only within the same functional area. Don't batch unrelated fixes — the Maker loses focus.
---
## Step 4: Exit Decision
After all fixes are applied, evaluate exit conditions:
### Decision Tree
```
┌─ Count remaining CRITICAL findings (including from re-checks)
├─ CRITICAL = 0 AND completion criteria met (if defined)
│ └─ EXIT: Proceed to merge
├─ CRITICAL = 0 AND completion criteria NOT met
│ └─ CYCLE: Feed back "completion criteria failing" to Creator
├─ CRITICAL > 0 AND cycles_remaining > 0
│ └─ CYCLE: Build feedback, go to Plan phase
├─ CRITICAL > 0 AND cycles_remaining = 0
│ └─ STOP: Report to user with unresolved findings
└─ Same CRITICAL finding persisted 2+ cycles
└─ ESCALATE: Stop and ask user for guidance
```
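The decision tree above can be sketched as a bash function; variable names are illustrative, and escalation is checked first so a stuck finding always surfaces:

```bash
# Sketch only: the Act-phase exit decision. Argument names are
# illustrative, not a real lib interface.
decide_next_action() {
  local critical="$1" cycles_remaining="$2" criteria_met="$3" persisted_cycles="$4"
  if [ "$persisted_cycles" -ge 2 ]; then echo ESCALATE
  elif [ "$critical" -eq 0 ] && [ "$criteria_met" = yes ]; then echo EXIT
  elif [ "$critical" -eq 0 ]; then echo CYCLE   # criteria failing: feed back to Creator
  elif [ "$cycles_remaining" -gt 0 ]; then echo CYCLE
  else echo STOP
  fi
}

decide_next_action 0 1 yes 0   # EXIT: no criticals, criteria met
decide_next_action 1 0 no 0    # STOP: critical left, no cycles remaining
```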
### Emit `cycle.boundary` event:
```json
{
"type": "cycle.boundary",
"phase": "act",
"data": {
"cycle": 1,
"max_cycles": 2,
"exit_condition": "all_approved",
"met": false,
"critical_remaining": 1,
"warning_remaining": 2,
"info_remaining": 1,
"fixes_applied": 3,
"design_issues_forwarded": 1,
"next_action": "cycle"
}
}
```
---
## Step 5: Cycle Feedback Protocol
When cycling back, produce `act-feedback.md` as a structured handoff. This replaces dumping raw findings.
```markdown
## Cycle N Feedback → Cycle N+1
### For Creator (design changes needed)
| # | Source | Severity | Category | Issue | Cycles Open |
|---|--------|----------|----------|-------|-------------|
| 1 | guardian | CRITICAL | security | SQL injection in user input | 1 |
| 2 | skeptic | WARNING | design | Assumes single-tenant only | 1 |
### For Maker (implementation fixes needed)
| # | Source | Severity | Category | Issue | Cycles Open |
|---|--------|----------|----------|-------|-------------|
| 3 | sage | WARNING | testing | Test assertions too weak | 1 |
| 4 | trickster | WARNING | reliability | Error path not tested | 1 |
### Resolved in This Cycle
| # | Source | Issue | How Resolved |
|---|--------|-------|--------------|
| 5 | guardian | Missing rate limit | Added rate limiter middleware (commit abc123) |
| 6 | sage | Test names unclear | Renamed to behavior descriptions (commit def456) |
### Persisting Issues (escalation candidates)
| # | Source | Issue | Cycles Open | Action |
|---|--------|-------|-------------|--------|
| — | — | — | — | — |
```
**Routing rules** (canonical table — matches orchestration and artifact-routing skills):
This is the **canonical routing table** (single source of truth for the whole system):
| Source | Category | Routes to | Reason |
|--------|----------|-----------|--------|
@@ -296,76 +50,91 @@ When cycling back, produce `act-feedback.md` as a structured handoff. This repla
| Sage | quality, consistency | Maker | Implementation refinement |
| Sage | testing | Maker | Test gap, not design flaw |
| Trickster | reliability (design flaw) | Creator | Needs redesign |
| Trickster | reliability (test gap) | Maker | Needs more tests |
| Trickster | testing | Maker | Edge case not covered |
| Trickster | reliability (test gap), testing | Maker | Needs more tests |
**Disambiguation rule:** When in doubt: if the fix requires changing the approach, route to Creator. If it requires changing the code within the existing approach, route to Maker.
**Disambiguation:** If the fix requires changing the approach → Creator. If it requires changing code within the existing approach → Maker.
### Direct Fix (no agent)
Apply with Edit tool when **all** are true:
- Mechanical (typo, naming, formatting, import order)
- No behavioral change
- No test update needed
- Single file
### Maker Fix (spawn agent)
Spawn a targeted Maker when the fix involves code logic, tests, multiple files, or behavioral changes. Batch findings in the same file area into one Maker spawn.
```
Agent(
description: "Fix: <description>",
prompt: "You are the MAKER archetype.
Branch: <maker's branch>
Findings:
1. [CRITICAL] file:line — issue → suggested fix
2. [WARNING] file:line — issue → suggested fix
Rules: fix ONLY these issues, add/update tests if behavior changes,
run tests, commit each fix separately as 'fix: <description>'.
Do NOT refactor surrounding code.",
isolation: "worktree",
mode: "bypassPermissions"
)
```
### Design Fix (route to Creator)
Design findings are NOT fixed in Act. Collect them into `act-feedback.md` for the Creator in the next cycle (see Step 5).
---
## Step 6: Incremental Runs
Support starting the orchestration from any phase by reusing existing artifacts.
### `--start-from check`
Re-run Check + Act on existing Do artifacts:
1. Read `.archeflow/artifacts/<run_id>/` for the Maker branch and implementation summary
2. Verify the Maker branch still exists (`git branch --list`)
3. Spawn reviewers against the existing branch
4. Proceed through the Act phase normally
### `--start-from act`
Re-run Act with existing Check findings:
1. Read `.archeflow/artifacts/<run_id>/` for the Check phase consolidated output
2. Parse findings from the stored reviewer outputs
3. Skip finding collection (already done) — proceed from Step 2 (Fix Routing)
### `--start-from do`
Re-run Do + Check + Act with an existing Plan:
1. Read `.archeflow/artifacts/<run_id>/` for the Creator's proposal
2. Verify the proposal exists and is parseable
3. Spawn the Maker with the existing proposal
4. Proceed through Check and Act normally
### Artifact Verification
Before starting from a mid-point, verify required artifacts exist:
```
--start-from do    → needs: proposal (Creator output)
--start-from check → needs: proposal + implementation (Maker branch + summary)
--start-from act   → needs: proposal + implementation + review outputs
```
If artifacts are missing, report which ones and abort. Don't guess or generate placeholders.
### Event Continuity
For incremental runs, emit events with `parent` pointing to the existing artifacts' events:
1. Read the existing `<run_id>.jsonl` to find the last `seq` number
2. Continue sequence numbering from there
3. Set `parent` on the first new event to point to the last event of the prior phase
---
## Step 3: Fix Application
Apply fixes in severity order: CRITICAL → WARNING → INFO. Within the same severity, group by file.
For each fix:
1. Apply the change (direct edit or via a Maker agent)
2. Emit a `fix.applied` event with source, finding, file, severity, before/after
3. For non-trivial fixes: re-run only the originating reviewer, scoped to the changed files. New findings from re-check are added with source `re-check:<reviewer>`
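The event-continuity bookkeeping above can be sketched in shell. This is a minimal sketch, assuming the event log lives at `<run_id>.jsonl` in the working directory and each event line carries a numeric `seq` field; the file path and event fields are illustrative, not the actual run layout:

```shell
RUN_ID=${RUN_ID:-demo}            # hypothetical run id
EVENTS="${RUN_ID}.jsonl"

# Last seq in the existing log (defaults to 0 if the log is empty or missing)
LAST_SEQ=$(tail -n 1 "$EVENTS" 2>/dev/null | sed -n 's/.*"seq":\([0-9]*\).*/\1/p')
LAST_SEQ=${LAST_SEQ:-0}
NEXT_SEQ=$((LAST_SEQ + 1))

# The first new event points back at the last event of the prior phase
printf '{"seq":%d,"parent":%d,"event":"phase.start","phase":"act"}\n' \
  "$NEXT_SEQ" "$LAST_SEQ" >> "$EVENTS"
```

On a fresh log this appends `seq: 1` with `parent: 0`; on an existing log it continues numbering from the last recorded event.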
---
## Act Phase Checklist (Quick Reference)
```
□ Parse all reviewer outputs into a consolidated findings table
□ Deduplicate across reviewers
□ Compare against prior-cycle findings (if cycle > 1)
□ Route each finding: direct fix / Maker / Creator feedback
□ Apply direct fixes first (fastest)
□ Spawn Maker(s) for code fixes (batch by file area)
□ Emit a fix.applied event for each fix
□ Re-check non-trivial fixes with the originating reviewer
□ Count remaining CRITICALs after all fixes
□ Check completion criteria (if defined)
□ Decide: exit / cycle / escalate
□ If cycling: produce act-feedback.md with routed findings
□ If exiting: proceed to merge (see orchestration skill Step 4)
□ Emit cycle.boundary event
```
## Step 4: Exit Decision
```
CRITICAL = 0 AND criteria met     → EXIT: proceed to merge
CRITICAL = 0 AND criteria NOT met → CYCLE: feedback to Creator
CRITICAL > 0 AND cycles remaining → CYCLE: build feedback, go to Plan
CRITICAL > 0 AND no cycles left   → STOP: report unresolved to user
Same CRITICAL persists 2+ cycles  → ESCALATE: ask user for guidance
```
Emit `cycle.boundary` event with: cycle number, max_cycles, critical/warning/info remaining, fixes applied, next action.
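The exit-decision table above can be encoded as a small function. A sketch with illustrative argument names, assuming one precedence: a CRITICAL that has persisted 2+ cycles is checked first:

```shell
# Sketch of the Act-phase exit decision. Arguments:
#   critical: remaining CRITICAL count
#   criteria_met: "true"/"false" for completion criteria
#   cycles_left: remaining cycle budget
#   persisted_cycles: cycles the oldest open CRITICAL has been open
exit_decision() {
  local critical=$1 criteria_met=$2 cycles_left=$3 persisted_cycles=$4
  if [ "$persisted_cycles" -ge 2 ]; then
    echo "ESCALATE"   # same CRITICAL persists 2+ cycles
  elif [ "$critical" -eq 0 ] && [ "$criteria_met" = "true" ]; then
    echo "EXIT"       # proceed to merge
  elif [ "$critical" -eq 0 ]; then
    echo "CYCLE"      # criteria not met: feedback to Creator
  elif [ "$cycles_left" -gt 0 ]; then
    echo "CYCLE"      # build feedback, go to Plan
  else
    echo "STOP"       # report unresolved to user
  fi
}
```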
---
## Step 5: Cycle Feedback
When cycling back, produce `act-feedback.md`:
```markdown
## Cycle N → Cycle N+1
### For Creator (design changes needed)
| # | Source | Severity | Category | Issue | Cycles Open |
|---|--------|----------|----------|-------|-------------|
### For Maker (implementation fixes needed)
| # | Source | Severity | Category | Issue | Cycles Open |
|---|--------|----------|----------|-------|-------------|
### Resolved This Cycle
| # | Source | Issue | How Resolved |
|---|--------|-------|--------------|
### Persisting Issues (escalation candidates)
| # | Source | Issue | Cycles Open | Action |
|---|--------|-------|-------------|--------|
```
Route findings into Creator vs Maker sections using the routing table in Step 2.


@@ -1,121 +0,0 @@
---
name: attention-filters
description: Use when spawning archetype agents to decide what context each agent receives. Reduces token waste and sharpens focus by passing only relevant artifacts.
---
# Attention Filters
Each archetype needs different context. Pass only what's relevant — not everything.
| Archetype | Receives | Does NOT Receive |
|-----------|----------|-----------------|
| Explorer | Task description, codebase access | Prior proposals or reviews |
| Creator | Explorer's research + task description | Implementation details |
| Maker | Creator's proposal | Explorer's research, reviews |
| Guardian | Maker's git diff + proposal risk section | Explorer's research |
| Skeptic | Creator's proposal (focus: assumptions) | Git diff details |
| Trickster | Maker's git diff only | Everything else |
| Sage | Proposal + implementation + diff | Explorer's raw research |
## Why This Matters
- **Token cost:** A Guardian reading the Explorer's 2000-word research wastes ~2600 tokens on irrelevant context
- **Focus:** An agent with too much context drifts from its archetype's concern
- **Shadow prevention:** Over-loading context encourages rabbit-holing (Explorer) and scope creep (Maker)
## In Practice
When spawning a Check-phase agent, include only the filtered context in the prompt:
```
# Guardian receives:
"Review these changes: <git diff output>
The proposal identified these risks: <risks section only>
Verdict: APPROVED or REJECTED with findings."
# NOT:
"Here is the full research, the full proposal, the full implementation,
the full git log, and everything else we have..."
```
## Prompt Construction Templates
### Explorer
- **Receives:** Task description, file tree (max 200 lines), prior-cycle feedback (if cycle 2+)
- **Excludes:** Creator proposals, Maker diffs, reviewer outputs
- **Token target:** ~2000 tokens input
### Creator
- **Receives:** Task description, Explorer research (if available), prior-cycle feedback (if cycle 2+)
- **Excludes:** Maker diffs, reviewer outputs
- **Token target:** ~3000 tokens input
### Maker
- **Receives:** Creator's proposal (full), test strategy section, file list
- **Excludes:** Explorer research, reviewer outputs, prior-cycle feedback
- **Token target:** ~2500 tokens input
### Guardian
- **Receives:** Maker's git diff, proposal risk section, test results
- **Excludes:** Explorer research, Creator rationale, Skeptic/Sage outputs
- **Token target:** ~2000 tokens input
### Skeptic
- **Receives:** Creator's proposal (assumptions + architecture decision), confidence scores
- **Excludes:** Git diff details, Explorer raw research, other reviewer outputs
- **Token target:** ~1500 tokens input
### Trickster
- **Receives:** Maker's git diff only, attack surface summary (file types + entry points)
- **Excludes:** Proposal, research, other reviewer outputs
- **Token target:** ~1500 tokens input
### Sage
- **Receives:** Creator's proposal, Maker's implementation summary + diff, test results
- **Excludes:** Explorer raw research, other reviewer verdicts
- **Token target:** ~2500 tokens input
## Token Budget Targets
| Archetype | Fast | Standard | Thorough |
|-----------|------|----------|----------|
| Explorer | skip | 2000 | 3000 |
| Creator | 2000 | 3000 | 4000 |
| Maker | 2000 | 2500 | 3000 |
| Guardian | 1500 | 2000 | 2500 |
| Skeptic | skip | 1500 | 2000 |
| Trickster | skip | skip | 1500 |
| Sage | skip | 2500 | 3000 |
"skip" means the archetype is not spawned in that workflow tier.
## Cycle-Back Filtering
When injecting prior-cycle feedback into cycle 2+:
1. **Summary only** — pass the structured feedback table (issue, source, severity), not full reviewer artifacts
2. **Strip resolved items** — if a finding was marked Fixed in the Act phase, exclude it
3. **Compress context** — prior proposal diffs reduce to "What Changed" section only (not full re-proposal)
4. **Cap at 500 tokens** — if feedback exceeds this, summarize by severity (CRITICAL first, then WARNING, drop INFO)
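The compression steps above can be sketched in shell. Assumptions: findings are markdown table rows with a severity cell, resolved items contain a `Fixed` cell, the filename is illustrative, and a 20-row cap is used as a rough proxy for the 500-token limit (INFO rows are dropped unconditionally in this simplification):

```shell
# Compress prior-cycle feedback: CRITICAL rows first, then WARNING,
# strip resolved items, cap the total row count.
compress_feedback() {
  local src=$1
  {
    grep '| CRITICAL |' "$src"
    grep '| WARNING |' "$src"
  } | grep -v '| Fixed |' | head -n 20
}
# Usage: compress_feedback act-feedback.md > feedback-summary.md
```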
## Filter Verification Checklist
Before spawning each agent, verify:
- [ ] Prompt contains ONLY the artifacts listed in that archetype's "Receives" above
- [ ] No cross-contamination from other reviewers' outputs
- [ ] Token count is within 20% of the target for the current workflow tier
- [ ] Prior-cycle feedback (if any) is summarized, not raw
- [ ] Excluded artifacts are genuinely absent (search for keywords like file paths from excluded sources)
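The last checklist item can be automated with a grep sweep over the constructed prompt. A sketch — the prompt filename and the excluded-artifact list are illustrative (here, a Guardian prompt):

```shell
# Fail (non-zero) if the prompt references any excluded artifact.
check_leaks() {
  local prompt=$1; shift
  local leaked=0
  for banned in "$@"; do
    if grep -q "$banned" "$prompt"; then
      echo "LEAK: $banned referenced in $prompt" >&2
      leaked=1
    fi
  done
  return $leaked
}
# Usage: check_leaks guardian-prompt.md plan-explorer.md check-skeptic.md check-sage.md
```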
## Context Isolation
Attention filters control *what* each agent receives. Context isolation controls *how* that context is constructed — ensuring agents operate on provided facts, not ambient knowledge.
### Rules
1. **No session bleed.** Agents receive fresh context only — constructed from task description, artifact files, or extracted sections. They must not inherit session state, chat history, or prior agent prompts.
2. **No cross-agent contamination.** An agent receives another agent's output only if the attention filter table above explicitly allows it. Guardian does not see Skeptic's output. Skeptic does not see the Maker's diff. Violations produce unreliable reviews.
3. **Controller-constructed only.** All agent context is assembled by the orchestrator from: (a) the task description, (b) artifact files on disk, or (c) extracted sections of those artifacts. Agents never pull their own context.
4. **No ambient knowledge.** Agents cannot "remember" findings from prior phases or cycles unless that information is explicitly injected via the cycle-back filtering protocol above. An agent that references information not in its prompt is hallucinating.
5. **Verification.** Before spawning each agent, confirm the constructed prompt has zero references to other agents' raw outputs that are not in the "Receives" column. Search for file paths, archetype names, and finding descriptions from excluded sources.


@@ -1,233 +1,110 @@
---
name: check-phase
description: Use when acting as Guardian, Skeptic, Sage, or Trickster in the Check phase. Defines review rules, finding format, attention filters, and spawning protocol.
---
# Check Phase
Reviewers examine the Maker's implementation in parallel. This skill defines shared rules, finding format, and spawning protocol.
## Shared Rules
1. **Read the proposal first.** Review against the intended design, not invented requirements.
2. **Read the actual code.** Use `git diff` on the Maker's branch. Don't review descriptions alone.
3. **Structured findings.** Use the finding format below for every issue.
4. **Clear verdict.** `APPROVED` or `REJECTED` with rationale.
5. **Status tokens are separate from verdicts.** `STATUS: DONE` signals agent completion; `APPROVED`/`REJECTED` is domain output. A reviewer can be `STATUS: DONE` with verdict `REJECTED` — that is normal. Parse both independently.
## Finding Format
Every finding must use this format for cross-cycle tracking:
```
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth/handler.ts:48 | CRITICAL | security | Empty string bypasses validation | Add length check |
```
**Severity:** CRITICAL = must fix, blocks approval. WARNING = should fix, doesn't block alone. INFO = nice to have, never blocks.
**Categories** (use consistently for cross-cycle tracking):
- `security` — Injection, auth bypass, data exposure, secrets
- `reliability` — Error handling, edge cases, race conditions, crashes
- `design` — Architecture, assumptions, scalability, coupling
- `breaking-change` — API compatibility, schema migrations, removals
- `dependency` — New deps, version conflicts, license issues
- `quality` — Readability, maintainability, naming, duplication
- `testing` — Missing tests, weak assertions, untested paths
- `consistency` — Deviates from codebase patterns
## Evidence Requirements
Every CRITICAL or WARNING finding must include concrete evidence. Findings without evidence are downgraded to INFO.
**Valid evidence:** command output, exit codes, code citations with line numbers, git diff excerpts, reproduction steps.
**Banned in CRITICAL/WARNING:** "might be", "could potentially", "appears to", "seems like", "may not". Rewrite with evidence or downgrade.
For each CRITICAL/WARNING, state: (1) what was tested, (2) what was observed, (3) what correct behavior should be.
## Attention Filters
Each archetype receives only relevant context. Do not pass everything.
| Archetype | Receives | Excludes |
|-----------|----------|----------|
| Guardian | Maker's git diff + proposal risk section + test results | Explorer research, Creator rationale, other reviewers |
| Skeptic | Creator's proposal (assumptions + architecture) + confidence scores | Git diff, Explorer research, other reviewers |
| Sage | Creator's proposal + Maker's diff + implementation summary + test results | Explorer raw research, other reviewer verdicts |
| Trickster | Maker's git diff + attack surface summary (file types + entry points) | Proposal, research, other reviewers |
**Token budget targets:**
| Archetype | Fast | Standard | Thorough |
|-----------|------|----------|----------|
| Guardian | 1500 | 2000 | 2500 |
| Skeptic | skip | 1500 | 2000 |
| Trickster | skip | skip | 1500 |
| Sage | skip | 2500 | 3000 |
**Context isolation:** Agents receive fresh, controller-constructed context only. No session bleed, no cross-agent contamination, no ambient knowledge. Verify zero references to excluded artifacts before spawning.
**Cycle-back filtering (cycle 2+):** Pass the structured feedback table only (not full reviewer artifacts). Strip resolved items. Cap at 500 tokens — summarize by severity if exceeded.
## Reviewer Spawning Protocol
This section defines the exact sequence for spawning reviewers in the Check phase.
### Step 1: Guardian First (mandatory)
Guardian always runs first, before any other reviewer. It receives the Maker's git diff and the proposal's risk section only — not Explorer research, the full proposal, or other reviewer outputs.
```
Agent(
  description: "Guardian: security and risk review for <task>",
  prompt: "You are the GUARDIAN archetype.
    Review the diff: <maker's diff>
    Proposal risks: <risk section from plan-creator.md>
    Assess: security vulnerabilities, reliability risks, breaking changes, dependency risks.
    Output: APPROVED or REJECTED with findings in the standardized format.
    Each finding: | Location | Severity | Category | Description | Fix |",
  model: <resolve_model guardian $WORKFLOW>
)
```
Save output to `.archeflow/artifacts/${RUN_ID}/check-guardian.md`.
### Step 2: A2 Fast-Path Evaluation
After Guardian completes, count CRITICAL and WARNING findings in its output before spawning other reviewers. If both are zero, the run has not been escalated, and this is not the first cycle of a thorough workflow, skip the remaining reviewers and proceed to the Act phase.
```bash
GUARDIAN_OUT=".archeflow/artifacts/${RUN_ID}/check-guardian.md"
# grep -c prints 0 itself on no match; default to 0 only if the file is missing
CRITICAL_COUNT=$(grep -c "| CRITICAL |" "$GUARDIAN_OUT" 2>/dev/null || true)
CRITICAL_COUNT=${CRITICAL_COUNT:-0}
WARNING_COUNT=$(grep -c "| WARNING |" "$GUARDIAN_OUT" 2>/dev/null || true)
WARNING_COUNT=${WARNING_COUNT:-0}

# A2 fast-path: skip remaining reviewers if Guardian is clean
# Exception: first cycle of thorough workflows always spawns all reviewers
if [[ "$CRITICAL_COUNT" -eq 0 && "$WARNING_COUNT" -eq 0 \
      && "$ESCALATED" != "true" \
      && ! ( "$WORKFLOW" == "thorough" && "$CYCLE" -eq 1 ) ]]; then
  echo "Guardian fast-path: 0 CRITICAL, 0 WARNING — skipping remaining reviewers."
  # Proceed directly to Act phase
fi
```
### Step 3: Parallel Reviewer Spawning
If A2 does not trigger, spawn the remaining reviewers in parallel based on workflow:
| Workflow | Reviewers (after Guardian) |
|----------|----------------------------|
| `fast` | None (Guardian only) |
| `fast` (escalated via A1) | Skeptic + Sage |
| `standard` | Skeptic + Sage |
| `thorough` | Skeptic + Sage + Trickster |
Spawn all applicable reviewers in a single message with multiple Agent calls. Each reviewer gets context per the attention filters above.
```
# Standard workflow example — spawn Skeptic and Sage in parallel:
Agent(
  description: "Skeptic: challenge assumptions for <task>",
  prompt: "<Skeptic prompt with Creator's proposal>",
  model: <resolve_model skeptic $WORKFLOW>
)
Agent(
  description: "Sage: holistic quality review for <task>",
  prompt: "<Sage prompt with proposal + diff + implementation summary>",
  model: <resolve_model sage $WORKFLOW>
)
```
### Step 4: Collect and Consolidate
Wait for all spawned reviewers to return. For each reviewer: save output to `.archeflow/artifacts/${RUN_ID}/check-<archetype>.md`, emit a `review.verdict` event, and record the sequence number for DAG parent tracking.
**Deduplication:** If two reviewers raise the same issue (same file + same category), merge into one finding using the higher severity. Don't double-count.
**Verdict:** Count CRITICAL findings across all reviewers (after dedup). Any CRITICAL = `REJECTED`. Otherwise `APPROVED`.
Example consolidated output:
```markdown
## Check Phase Results — Cycle 1
### Guardian: APPROVED
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth.ts:52 | WARNING | security | Missing rate limit | Add rate limiter |
### Verdict: APPROVED — 0 critical, 1 warning
```
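The deduplication rule (same file + same category keeps only the higher severity) can be sketched with awk. A sketch assuming the standard finding-table row layout; the function name is illustrative:

```shell
# Reads finding rows on stdin; emits one row per (file, category) pair,
# keeping the highest-severity duplicate. Header/separator rows are
# skipped because their severity cell ranks 0.
dedupe_findings() {
  awk -F'|' '
    function rank(s) { return s ~ /CRITICAL/ ? 3 : s ~ /WARNING/ ? 2 : s ~ /INFO/ ? 1 : 0 }
    /^\|/ && rank($3) > 0 {
      split($2, loc, ":")        # "src/auth.ts:48" -> file part in loc[1]
      key = loc[1] SUBSEP $4     # file + category
      if (rank($3) > best[key]) { best[key] = rank($3); row[key] = $0 }
    }
    END { for (k in row) print row[k] }
  '
}
# Usage: cat .archeflow/artifacts/$RUN_ID/check-*.md | dedupe_findings
```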
## Timeout Handling
Each reviewer has a **5-minute timeout**. If a reviewer does not return within 5 minutes:
1. Emit `agent.complete` with `"error": true, "reason": "timeout"`
2. Log a WARNING — do not block the run
3. Treat the timed-out reviewer as having delivered no findings (neither approved nor rejected)
4. Proceed with available verdicts
**Exception:** Guardian timeout is a blocking failure — abort the Check phase and report to the user.
## Re-Check Protocol (Act Phase Fixes)
When the Act phase routes findings back to the Maker and fixes land in a subsequent cycle, the Check phase re-runs with the updated diff. Reviewers who previously rejected should focus on whether their specific findings were addressed. The structured feedback in `act-feedback.md` maps which findings were routed where.
## Evidence Types
| Type | Example | When Required |
|------|---------|---------------|
| Command output | `npm test` output showing failure | Test-related findings |
| Exit code | `exit code 1 from eslint` | Tool-based validation |
| Code citation | `src/auth.ts:48 — \`if (token) { ... }\`` | Logic or security findings |
| Git diff | `+ db.query(userInput)` (unsanitized) | Implementation review |
| Reproduction steps | "1. Send POST with empty body, 2. Observe 500" | Runtime behavior findings |
**Downgrade rule:** If a reviewer produces a CRITICAL or WARNING finding without any of the evidence types above, the orchestrator downgrades it to INFO and emits a `decision` event:
```bash
./lib/archeflow-event.sh "$RUN_ID" decision check "" \
  '{"what":"evidence_downgrade","from":"CRITICAL","to":"INFO","finding":"<description>","reviewer":"<archetype>","reason":"no evidence provided"}'
```
## Why Structured Findings Matter
The standardized format enables:
- **Cross-cycle tracking:** Same category + location = same issue. Resolution or regression is detectable.
- **Feedback routing:** Security/design findings → Creator. Quality/testing findings → Maker.
- **Shadow detection:** CRITICAL:WARNING ratios, finding counts, and category distributions are measurable.
- **Metrics:** Severity counts feed the orchestration summary.


@@ -1,66 +1,129 @@
---
name: shadow-detection
description: |
  Corrective action framework for agent dysfunction, system health, and operational policy.
  Three layers — archetype shadows, system shadows, policy boundaries — one escalation protocol.
---
# Corrective Action Framework
Detect dysfunction. Apply corrective action. Escalate if repeated.
Three layers, one protocol:
- **Archetype Shadows** — individual agent dysfunction (virtue pushed too far)
- **System Shadows** — orchestration-level dysfunction (process going wrong)
- **Policy Boundaries** — operational limits (time, cost, quality thresholds)

Every archetype has a virtue and a shadow (its destructive inversion); the shadow activates when the virtue is pushed too far:

| Archetype | Virtue | Shadow |
|-----------|--------|--------|
| Explorer | Contextual Clarity | Rabbit Hole |
| Creator | Decisive Framing | Over-Architect |
| Maker | Execution Discipline | Rogue |
| Guardian | Threat Intuition | Paranoid |
| Skeptic | Assumption Surfacing | Paralytic |
| Trickster | Adversarial Creativity | False Alarm |
| Sage | Maintainability Judgment | Bureaucrat |
---
## Archetype Shadows
| Archetype | Shadow | Detect (any) | Corrective Action |
|-----------|--------|--------------|-------------------|
| Explorer | Rabbit Hole | Output >2000w without Recommendation; >3 tangents; >15 files, no patterns; no synthesis in final 25% | "Summarize top 3 findings and one recommendation in 300 words." |
| Creator | Over-Architect | >2 new abstractions for one feature; "future-proof" in rationale; scope exceeds task >50%; >1 new package | "Design for the current order of magnitude. Remove abstractions for hypothetical requirements." |
| Maker | Rogue | Zero test files with >=3 files changed; single monolithic commit; files outside proposal; no test run evidence | "Read the proposal. Write a test. Commit. Revert out-of-scope files." |
| Guardian | Paranoid | CRITICAL:WARNING ratio >2:1 (min 3); zero APPROVED in 3+ reviews; <50% findings include fix; findings require compromised systems | "For each CRITICAL: would a senior engineer block a PR? If not, downgrade. Every rejection needs a specific fix." |
| Skeptic | Paralytic | >7 challenges; <50% include alternatives; same concern 2+ times reworded; >3 findings outside scope | "Rank by impact. Keep top 3 with alternatives. Delete the rest." |
| Trickster | False Alarm | Findings in untouched code; >10 findings for <5 files; impossible scenarios; >3 without repro steps | "Delete findings outside the diff. Rank by likelihood x impact. Keep top 3-5." |
| Sage | Bureaucrat | Review words >2x diff lines; findings outside changeset; >2 "consider" without action; suggesting docs for trivial functions | "Limit to issues affecting maintainability in 6 months. Every finding needs a specific action." |
### Shadow Immunity
Intensity alone is not a shadow. **Shadow = behavior disconnected from the goal.**
---
## Escalation Protocol
1. **1st detection:** Log the shadow, apply the correction prompt, let the agent continue
2. **2nd detection (same agent, same shadow):** Replace the agent -- the shadow is entrenched
3. **3+ agents shadowed in same cycle:** Escalate to user -- the task may need to be broken down
## Shadow Immunity
Some behaviors look like shadows but are not. **Rule of thumb:** shadow = behavior disconnected from the goal. Intensity alone is not a shadow.
- Explorer reading 20 files in a monorepo with scattered dependencies -- not a rabbit hole if each file is genuinely relevant
- Creator adding an abstraction -- not over-architect if the current task genuinely needs it
- Guardian blocking with 2 CRITICALs -- not paranoid if both are genuine security vulnerabilities
- Explorer reading 20 files in a monorepo with scattered deps -- not rabbit hole if each is relevant
- Guardian blocking with 2 CRITICALs -- not paranoid if both are genuine vulnerabilities
- Trickster finding 5 edge cases -- not false alarm if all are in changed code with repro steps
- Sage writing a long review -- not bureaucrat if the change is large and every finding is actionable
---
## System Shadows
Orchestration-level dysfunction that isn't tied to one archetype.
| Shadow | Detect | Corrective Action |
|--------|--------|-------------------|
| **Tunnel Vision** | All reviewers flag same category (e.g., 4 security findings, 0 quality/testing) | "Redistribute attention. Are we missing quality, testing, or design concerns?" |
| **Echo Chamber** | Unanimous approval in <30s on standard/thorough workflow | "Suspicious fast consensus. Re-run Guardian with adversarial prompt." |
| **Gold Plating** | Maker working on INFO fixes while CRITICALs remain open | "Fix CRITICALs first. Park INFO items." |
| **Analysis Paralysis** | Plan phase >2x longer than Do phase; Explorer spawned 3+ times | "Stop researching. Ship a proposal with known gaps." |
| **Cargo Cult** | Memory lesson injected but the same finding repeats anyway | "Lesson ineffective. Reword, strengthen, or remove it." |
| **Broken Window** | 3+ WARNINGs deferred across consecutive runs in the same project | "Accumulated tech debt. Schedule a cleanup sprint." |
| **Scope Creep** | Maker changes >2x files listed in proposal | "Revert to proposal scope. If more files needed, update the proposal first." |
---
## Policy Boundaries
Operational limits that protect session quality, cost, and resumability.
### Checkpoint Policy
Every **45 minutes** or **3 completed tasks** (whichever first):
1. Commit + push all work in progress
2. Write handoff summary to `control-center.md`
3. Log token spend so far
4. Compare output quality: last task vs first task
5. If quality degrading -> STOP with clean state
6. If budget >80% spent -> STOP with clean state
7. Otherwise -> continue
### Budget Gate
| Threshold | Action |
|-----------|--------|
| 50% budget spent | Log warning, continue |
| 80% budget spent | Downgrade models (sonnet->haiku for reviewers) |
| 95% budget spent | Complete current task, then STOP |
| 100% budget | STOP immediately, commit WIP |
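The budget gate above can be sketched as a threshold function; the output labels are illustrative, and computing the spent percentage is assumed to happen elsewhere:

```shell
# Budget gate sketch: maps percent of budget spent to the table's action.
budget_gate() {
  local pct=$1
  if   [ "$pct" -ge 100 ]; then echo "stop-now-commit-wip"
  elif [ "$pct" -ge 95  ]; then echo "finish-task-then-stop"
  elif [ "$pct" -ge 80  ]; then echo "downgrade-reviewer-models"
  elif [ "$pct" -ge 50  ]; then echo "log-warning-continue"
  else echo "continue"
  fi
}
```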
### Circuit Breaker
| Trigger | Action |
|---------|--------|
| 3 consecutive agent failures/timeouts | STOP. Infrastructure issue, not a code problem. |
| 3 consecutive task failures in sprint | STOP. Something systemic is wrong. |
| Same shadow detected 3+ times in one cycle | STOP. Task needs to be broken down or re-scoped. |
| Test suite broken after merge | Auto-revert, STOP, report. |
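The "3 consecutive" triggers share one mechanism: a failure streak that a single success resets. As a sketch (class and method names are hypothetical), each trigger in the table would get its own breaker instance:

```python
class CircuitBreaker:
    """Trip after `threshold` consecutive failures; success resets."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.streak = 0

    def record(self, failed):
        """Record one outcome. Returns True when the breaker trips (STOP)."""
        self.streak = self.streak + 1 if failed else 0
        return self.streak >= self.threshold
```

The reset-on-success behaviour is the important design choice: intermittent failures never trip the breaker, only an unbroken streak does.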
### Diminishing Returns
| Signal | Action |
|--------|--------|
| Cycle N findings identical to cycle N-1 | STOP cycling. Present best result. |
| Convergence score <0.5 for 2 consecutive cycles | STOP. "This needs a different approach." |
| Reviewer finding count increases cycle over cycle | STOP. Implementation is diverging, not converging. |
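The three signals above can be checked from per-cycle summaries. A minimal sketch, assuming each cycle records its finding IDs and a convergence score (the `history` shape and field names are assumptions):

```python
def diminishing_returns(history):
    """Return a stop reason, or None to keep cycling.

    `history` is a list of per-cycle dicts:
    {"findings": set_of_finding_ids, "convergence": float}.
    """
    if len(history) < 2:
        return None
    prev, cur = history[-2], history[-1]
    if cur["findings"] == prev["findings"]:
        return "stop: findings identical to previous cycle"
    if cur["convergence"] < 0.5 and prev["convergence"] < 0.5:
        return "stop: this needs a different approach"
    if len(cur["findings"]) > len(prev["findings"]):
        return "stop: implementation is diverging, not converging"
    return None
```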
### Context Pollution
| Signal | Action |
|--------|--------|
| >15 memory lessons injected into one prompt | Prune to top 5 by frequency |
| >20 findings tracked across cycles | Summarize into top 5 themes |
| Agent prompt exceeds an estimated 50% of the context window | Strip examples, keep rules only |
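The lesson-pruning row reduces to one sorting step. A minimal sketch, assuming lessons carry a frequency count (the `(text, frequency)` pair shape is an assumption, not how the memory scripts store lessons):

```python
def prune_lessons(lessons, limit=15, keep=5):
    """If more than `limit` lessons would be injected into one prompt,
    keep only the `keep` most frequent; otherwise inject all of them.

    `lessons` is a list of (text, frequency) pairs.
    """
    if len(lessons) <= limit:
        return lessons
    return sorted(lessons, key=lambda l: l[1], reverse=True)[:keep]
```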
---
## Unified Escalation Protocol
All three layers use the same escalation:
| Step | Archetype Shadows | System Shadows | Policy Boundaries |
|------|-------------------|----------------|-------------------|
| **1st** | Apply corrective action, let agent continue | Apply corrective action, continue run | Apply boundary action (downgrade, checkpoint) |
| **2nd** (same issue) | Replace the agent -- shadow is entrenched | Pause run, report to user | Force stop with clean state |
| **3rd** (pattern) | Escalate to user: "task needs re-scoping" | Escalate to user: "systemic issue" | Escalate to user: "resource limits reached" |
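Because the three layers share the same 1st/2nd/3rd shape, the escalation table can be encoded as a single lookup (a sketch; the layer keys and action strings are shorthand for the table rows, not an existing API):

```python
def escalate(layer, occurrence):
    """Return the escalation action for the Nth occurrence of an issue.

    `layer` is "archetype", "system", or "policy"; occurrences beyond
    the 3rd keep returning the 3rd-step action (a pattern stays a pattern).
    """
    actions = {
        "archetype": ["apply corrective action, let agent continue",
                      "replace the agent",
                      "escalate to user: task needs re-scoping"],
        "system":    ["apply corrective action, continue run",
                      "pause run, report to user",
                      "escalate to user: systemic issue"],
        "policy":    ["apply boundary action",
                      "force stop with clean state",
                      "escalate to user: resource limits reached"],
    }
    return actions[layer][min(occurrence, 3) - 1]
```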
---
## Integration
Shadow checks run **after each agent completes** during orchestration. System shadow checks run **at phase boundaries**. Policy checks run **on a timer and at task boundaries**.
The `run` skill references this framework at:
- Step 3 (Check phase): archetype shadow monitoring
- Step 4 (Act phase): convergence/diminishing returns
- Step 5 (Completion): effectiveness scoring
- Sprint skill: checkpoint policy between batches

````diff
@@ -7,7 +7,7 @@ description: Use at session start when implementing features, reviewing code, de
 On activation, print ONE line then proceed silently:
 ```
-archeflow v0.7.0 · 25 skills · <domain> domain
+archeflow v0.8.0 · 19 skills · <domain> domain
 ```
 Domain auto-detected: `writing` if `colette.yaml` exists, `research` if paper/thesis files, `code` otherwise.
````