| name | description |
|---|---|
| check-phase | Use when you are acting as Guardian, Skeptic, Sage, or Trickster archetype in the Check phase. Defines shared review rules and output format. |
# Check Phase
Multiple reviewers examine the Maker's implementation in parallel. Each agent definition has its specific protocol — this skill defines the shared rules.
## Shared Rules
- Read the proposal first. Review against the intended design, not invented requirements.
- Read the actual code. Use `git diff` on the Maker's branch. Don't review descriptions alone.
- Structured findings. Use the standardized finding format below for every issue.
- Clear verdict: `APPROVED` or `REJECTED` with rationale.
## Finding Format
Every finding must use this format for cross-cycle tracking:
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth/handler.ts:48 | CRITICAL | security | Empty string bypasses validation | Add length check before processing |
Severity:
- CRITICAL — Must fix. Blocks approval.
- WARNING — Should fix. Doesn't block alone.
- INFO — Nice to have. Never blocks.
Categories (use consistently for cross-cycle tracking):
- `security` — Injection, auth bypass, data exposure, secrets
- `reliability` — Error handling, edge cases, race conditions, crashes
- `design` — Architecture, assumptions, scalability, coupling
- `breaking-change` — API compatibility, schema migrations, removals
- `dependency` — New deps, version conflicts, license issues
- `quality` — Readability, maintainability, naming, duplication
- `testing` — Missing tests, weak assertions, untested paths
- `consistency` — Deviates from codebase patterns
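For illustration, a finding-table row can be parsed into a structured object for downstream tooling. This is a hedged sketch, not part of the protocol: the `Finding` interface and `parseFindingRow` helper are hypothetical names.

```typescript
// Hypothetical structured representation of one finding-table row.
interface Finding {
  location: string;   // file:line, e.g. "src/auth/handler.ts:48"
  severity: "CRITICAL" | "WARNING" | "INFO";
  category: string;   // one of the eight categories above
  description: string;
  fix: string;
}

// Split a markdown table row on "|", trim each cell, drop the empty
// cells produced by the leading and trailing pipes.
function parseFindingRow(row: string): Finding {
  const cells = row
    .split("|")
    .map((c) => c.trim())
    .filter((c) => c.length > 0);
  const [location, severity, category, description, fix] = cells;
  return {
    location,
    severity: severity as Finding["severity"],
    category,
    description,
    fix,
  };
}

const f = parseFindingRow(
  "| src/auth/handler.ts:48 | CRITICAL | security | Empty string bypasses validation | Add length check |"
);
console.log(f.location, f.severity, f.category);
```

A real implementation would also validate that the severity and category cells contain one of the allowed values before accepting the row.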
## Consolidated Output
After all reviewers finish, compile:
## Check Phase Results — Cycle N
### Guardian: APPROVED
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth/handler.ts:52 | WARNING | security | Missing rate limit | Add rate limiter middleware |
### Skeptic: APPROVED
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth/handler.ts:30 | INFO | design | Consider caching validated tokens | Add TTL cache for token validation |
### Sage: APPROVED
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| tests/auth.test.ts:15 | WARNING | testing | Test names don't describe behavior | Rename to "should reject expired tokens" |
### Trickster: REJECTED
| Location | Severity | Category | Description | Fix |
|----------|----------|----------|-------------|-----|
| src/auth/handler.ts:48 | CRITICAL | reliability | Empty string bypasses validation | Add `if (!token || token.trim() === '')` guard |
### Verdict: REJECTED — 1 critical finding
→ Build cycle feedback (see orchestration skill) and feed to Plan phase
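The verdict step above can be sketched in code. This assumes that only CRITICAL findings block approval ("doesn't block alone" is read here as never blocking); the `deriveVerdict` name is illustrative, not part of the protocol.

```typescript
type Severity = "CRITICAL" | "WARNING" | "INFO";

// Any CRITICAL finding forces REJECTED; under this sketch's assumption,
// WARNING and INFO findings never block on their own.
function deriveVerdict(severities: Severity[]): "APPROVED" | "REJECTED" {
  return severities.includes("CRITICAL") ? "REJECTED" : "APPROVED";
}

console.log(deriveVerdict(["WARNING", "INFO"]));     // APPROVED
console.log(deriveVerdict(["WARNING", "CRITICAL"])); // REJECTED
```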
## Why Structured Findings Matter
The standardized format enables:
- Cross-cycle tracking: Same category + location = same issue. Can detect resolution or regression.
- Feedback routing: Security/design findings → Creator. Quality/testing findings → Maker.
- Shadow detection: CRITICAL:WARNING ratios, finding counts, and category distributions are measurable.
- Metrics: Severity counts feed into the orchestration summary.
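The tracking and routing rules above can be sketched as follows. The `findingKey` and `routeFinding` helpers are hypothetical names, and the fallback route for categories other than security/design is an assumption, since the protocol only specifies Creator and Maker targets for those four categories.

```typescript
interface Finding {
  location: string; // file:line
  category: string; // one of the eight categories
}

// Same category + location = same issue across cycles.
function findingKey(f: Finding): string {
  return `${f.category}@${f.location}`;
}

// Security/design findings go to the Creator; quality/testing to the
// Maker. Other categories default to Maker here (an assumption).
function routeFinding(f: Finding): "Creator" | "Maker" {
  return ["security", "design"].includes(f.category) ? "Creator" : "Maker";
}

// Detecting an unresolved finding from a previous cycle:
const previousCycle = new Set(["security@src/auth/handler.ts:48"]);
const current: Finding = { location: "src/auth/handler.ts:48", category: "security" };
console.log(previousCycle.has(findingKey(current))); // true: still unresolved
console.log(routeFinding(current));                  // Creator
```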