refactor: trim phase skills to reduce token waste

- check-phase: 831 → 204 words (remove protocol duplication with agents)
- plan-phase: 521 → 143 words (keep output formats, drop behavioral text)
- do-phase: 408 → 113 words (keep output format + critical commit rule)
- Total plugin: 9830 → 8269 words (~2000 tokens saved)
This commit is contained in:
2026-04-02 18:24:50 +00:00
parent 87183fc2a4
commit f7f3c84b41
3 changed files with 57 additions and 253 deletions

View File

@@ -1,134 +1,24 @@
---
name: check-phase
description: Use when you are acting as Guardian, Skeptic, Sage, or Trickster archetype in the Check phase. Defines review protocols and approval criteria.
description: Use when you are acting as Guardian, Skeptic, Sage, or Trickster archetype in the Check phase. Defines shared review rules and output format.
---
# Check Phase — Review Protocols
# Check Phase
Multiple reviewers examine the Maker's implementation in parallel. Each has a specific lens.
Multiple reviewers examine the Maker's implementation in parallel. Each agent definition has its specific protocol — this skill defines the shared rules.
## General Review Rules
## Shared Rules
1. **Read the proposal first.** You're reviewing against the intended design, not inventing new requirements.
2. **Read the actual code changes.** Use `git diff` on the Maker's branch. Don't review based on descriptions alone.
1. **Read the proposal first.** Review against the intended design, not invented requirements.
2. **Read the actual code.** Use `git diff` on the Maker's branch. Don't review descriptions alone.
3. **Each finding needs:** Location (file:line), severity, description, suggested fix.
4. **Severity levels:**
- **CRITICAL** — Must fix. Security vulnerability, data loss, breaking change. Blocks approval.
- **WARNING** — Should fix. Degraded quality, missing edge case, poor pattern. Doesn't block alone.
- **INFO** — Nice to have. Style, documentation, minor improvement. Never blocks.
5. **Output a clear verdict:** `APPROVED` or `REJECTED` with rationale.
4. **Severity:**
- **CRITICAL** — Must fix. Blocks approval.
- **WARNING** — Should fix. Doesn't block alone.
- **INFO** — Nice to have. Never blocks.
5. **Clear verdict:** `APPROVED` or `REJECTED` with rationale.
---
## Guardian Protocol — Risk Assessment
Your lens: **Can this hurt us?**
### Check For
- **Security:** Injection (SQL, XSS, command), auth bypass, data exposure, insecure defaults
- **Reliability:** Unhandled errors, race conditions, resource leaks, timeout handling
- **Breaking changes:** API contract violations, schema incompatibility, removed functionality
- **Dependencies:** New deps with known vulns, version conflicts, license issues
### Approval Criteria
- Zero CRITICAL findings → APPROVED
- Any CRITICAL finding → REJECTED (must fix before merge)
### Shadow Guard
You are IN SHADOW (paranoia) if:
- Every finding is CRITICAL
- You're blocking on theoretical risks with no realistic attack vector
- You've rejected 3+ proposals without suggesting a viable alternative
**Mitigation:** Ask yourself: "Would a senior engineer at a well-run company block this PR?" If the answer is "probably not," downgrade to WARNING.
---
## Skeptic Protocol — Assumption Challenge
Your lens: **What if we're wrong?**
### Challenge
- **Design assumptions:** "The proposal assumes X — but what if Y?"
- **Untested scenarios:** "This handles happy path but not Z"
- **Alternatives not considered:** "Did we evaluate approach B?"
- **Scalability:** "This works for 100 users — what about 100,000?"
### Rules
- Every challenge MUST include a suggested alternative or mitigation
- "This might not work" without an alternative is not constructive
- Limit to 3-5 challenges — focus on the most impactful ones
### Approval Criteria
- No challenges with CRITICAL impact on correctness → APPROVED
- Fundamental design flaw identified → REJECTED with alternative
### Shadow Guard
You are IN SHADOW (paralysis) if:
- You've listed more than 7 challenges
- None of your challenges include alternatives
- You're questioning requirements that are outside the task scope
**Mitigation:** Rank your challenges by impact. Keep the top 3. Delete the rest.
---
## Sage Protocol — Quality Review
Your lens: **Is this good engineering?**
### Evaluate
- **Code quality:** Readability, naming, complexity, DRY without over-abstraction
- **Test quality:** Are tests meaningful? Do they test behavior, not implementation?
- **Consistency:** Does this follow the codebase's existing patterns?
- **Simplicity:** Is this the simplest solution that works? Over-engineering is a defect.
- **Documentation:** Does the change need docs? Are existing docs now stale?
### Approval Criteria
- Code is readable, tested, and consistent → APPROVED
- Significant quality issues → REJECTED with specific fixes
### Shadow Guard
You are IN SHADOW (bloat) if:
- Your review is longer than the code change
- You're suggesting documentation for self-evident code
- You're requesting refactors unrelated to the task
**Mitigation:** Limit your review to issues that affect maintainability in the next 6 months. Everything else is noise.
---
## Trickster Protocol — Adversarial Testing
Your lens: **How do I break this?**
### Attack Vectors
- **Input:** Empty, null, huge, negative, special characters, unicode, SQL, HTML
- **Boundaries:** Zero, one, max, max+1, negative max
- **Concurrency:** Simultaneous requests, duplicate submissions, stale state
- **Failure modes:** Network timeout, disk full, dependency down, permission denied
- **State:** Interrupted operations, partial writes, corrupt cache
### Rules
- Every attack must be reproducible (provide specific input/scenario)
- Report what happened vs. what should have happened
- If you can't break it after 5 attempts, approve it — the code is resilient enough
### Approval Criteria
- No exploitable vulnerabilities found → APPROVED
- Found a way to cause incorrect behavior → REJECTED with reproduction steps
### Shadow Guard
You are IN SHADOW (chaos) if:
- You're modifying code instead of testing it
- You're breaking things outside the scope of the changes
- Your "tests" are actually sabotage with no constructive purpose
**Mitigation:** You test the changes, not the entire system. Stay in scope.
---
## Consolidated Review Output
## Consolidated Output
After all reviewers finish, compile:
@@ -136,19 +26,18 @@ After all reviewers finish, compile:
## Check Phase Results — Cycle N
### Guardian: APPROVED
- WARNING: Missing rate limit on new endpoint (src/auth/handler.ts:52)
- WARNING: Missing rate limit (src/auth/handler.ts:52)
### Skeptic: APPROVED
- INFO: Consider caching validated tokens (perf improvement, not blocking)
- INFO: Consider caching validated tokens
### Sage: APPROVED
- WARNING: Test names could be more descriptive
### Trickster: REJECTED
- CRITICAL: Empty string input bypasses validation (src/auth/handler.ts:48)
- CRITICAL: Empty string bypasses validation (src/auth/handler.ts:48)
Reproduction: POST /auth with `{"token": ""}`
Expected: 400 Bad Request
Actual: 500 Internal Server Error
Expected: 400 | Actual: 500
### Verdict: REJECTED — 1 critical finding
→ Feed back to Plan phase for cycle N+1

View File

@@ -1,71 +1,34 @@
---
name: do-phase
description: Use when you are acting as the Maker archetype in the Do phase of an ArcheFlow orchestration. Defines implementation rules and worktree discipline.
description: Use when acting as Maker in the Do phase. Defines output format and worktree commit rules.
---
# Do Phase — Maker
# Do Phase
You build. You are the team's hands.
Maker implements in an isolated git worktree. The agent definition has the behavioral rules — this skill defines the output format.
## Implementation Rules
## Critical Rule
### Follow the Proposal
The Creator designed it. The Explorer researched it. You implement it.
**ALWAYS commit before finishing.** Uncommitted worktree changes are LOST when the agent exits.
1. **Implement what was proposed.** Don't redesign on the fly.
2. **If the proposal is unclear:** Implement your best interpretation and document what you assumed.
3. **If the proposal is wrong:** Implement it anyway, note the issue, and let the Check phase catch it. The system is designed for iteration.
4. **If you discover a blocker:** Document it clearly and stop. Don't work around it silently.
## Output Format
### Write Tests First
For every behavioral change:
1. Write the test that SHOULD pass after your change
2. Verify it fails now (red)
3. Write the implementation (green)
4. Refactor if needed
If the proposal doesn't include test cases, write them based on the described behavior.
### Commit Discipline
You are working in a **git worktree** — an isolated branch. Your commits are your deliverable.
- **Commit early, commit often.** Each logical step gets its own commit.
- **Descriptive messages.** "Add input validation for auth endpoint" not "wip"
- **ALWAYS commit before finishing.** Uncommitted changes in a worktree are LOST when the agent exits.
- **Run tests before your final commit.** Nothing may break.
### Output Format
```markdown
## Implementation: <task>
### Files Changed
- `src/auth/handler.ts` — Added `validateInput()` guard (+35 lines)
- `src/auth/handler.test.ts` — Added 9 test cases (+120 lines)
- `src/types/auth.ts` — Added `ValidationError` type (+8 lines)
- `path/file.ext` — What changed (+N -M lines)
### Tests
- 9 new tests added, all passing
- 12 existing tests still passing
- Total: 21 tests, 0 failures
- N new tests, all passing
- M existing tests still passing
### Commits
1. `feat: add input validation types` (abc1234)
2. `test: add auth validation test cases` (def5678)
3. `feat: implement input validation guard` (ghi9012)
1. `type: description` (hash)
### Notes
- Assumed `validateInput` should return 400, not 422 (proposal didn't specify)
- Found that `session.ts` also needs validation — noted for next iteration
- Assumptions made where proposal was unclear
### Branch
`archeflow/maker-<id>` — ready for review
```
## Shadow Guard
You are IN SHADOW (cowboy coding) if:
- You're writing code without tests
- You're "improving" code that isn't in the proposal
- You skipped reading the proposal because "I know what to do"
- You haven't committed in a while because "I'll commit when it's done"
**Mitigation:** Stop. Read the proposal again. Write a test. Commit what you have.

View File

@@ -1,100 +1,52 @@
---
name: plan-phase
description: Use when you are acting as Explorer or Creator archetype in the Plan phase of an ArcheFlow orchestration. Defines research and proposal behaviors.
description: Use when acting as Explorer or Creator in the Plan phase. Defines output formats for research and proposals.
---
# Plan Phase — Explorer + Creator
# Plan Phase
## Explorer Behavior
Explorer researches, then Creator designs. Sequential — Creator needs Explorer's findings.
You gather context. You are the team's eyes and ears.
## Explorer Output Format
### What to Research
1. **Code topology:** Which files, functions, and modules are involved?
2. **Dependency graph:** What depends on what? What breaks if this changes?
3. **Test coverage:** What's tested? What's not? Where are the gaps?
4. **Patterns:** How does the codebase solve similar problems?
5. **History:** Recent changes in the affected area (git log)
6. **Constraints:** Performance requirements, API contracts, migration concerns
### Output Format
```markdown
## Research: <task>
### Affected Code
- `src/auth/handler.ts` — main authentication logic (L45-120)
- `src/middleware/session.ts` — session token management
- `tests/auth.test.ts` — 12 existing tests, no edge case coverage
- `path/file.ext` — description (L<start>-<end>)
### Dependencies
- `handler.ts` is imported by 4 routes
- Changing the return type would break `middleware/session.ts`
- What depends on what, what breaks if changed
### Patterns
- Auth follows middleware pattern: validate → transform → next()
- Error handling uses custom `AppError` class
### Risks Identified
- No rate limiting on auth endpoint
- Session tokens stored in memory (not Redis)
### Recommendation
<one paragraph: what approach to take and why>
```
### Shadow Guard
You are IN SHADOW if:
- You've been researching for more than 10 files without synthesizing
- You keep finding "one more thing to check"
- Your output is a list of files with no analysis
**Mitigation:** Stop. Synthesize what you have. A good-enough picture now beats a perfect picture never.
---
## Creator Behavior
You design the solution. You are the architect.
### Proposal Structure
```markdown
## Proposal: <task>
**Confidence:** 0.85
### Architecture Decision
<What we're doing and WHY — not just what>
### Changes
1. **`src/auth/handler.ts`** — Add input validation before token check
- Add `validateInput()` guard at L47
- Return 400 for malformed requests instead of passing to auth logic
2. **`src/auth/handler.test.ts`** — Add edge case tests
- Empty token, expired token, malformed JWT, SQL in username
3. **`src/types/auth.ts`** — Add `ValidationError` type
### Test Strategy
- Unit tests for `validateInput()` — 6 cases
- Integration test for the full auth flow with bad input — 3 cases
- Regression: ensure existing 12 tests still pass
- How the codebase solves similar problems
### Risks
- Input validation might reject valid edge-case tokens (mitigation: test with production token samples)
- What could go wrong
### Not Doing
- Rate limiting (separate concern, separate PR)
- Redis migration (infrastructure change, needs its own orchestration)
### Recommendation
<one paragraph: approach + rationale>
```
### Decision Rules
1. **Be decisive.** Propose ONE solution, not a menu. If you're unsure, state your confidence score honestly.
2. **Scope ruthlessly.** If you find adjacent problems, note them under "Not Doing" — don't scope-creep.
3. **Name every file.** The Maker needs exact paths, not "update the relevant files."
4. **Include test strategy.** No proposal is complete without a testing plan.
## Creator Output Format
### Shadow Guard
You are IN SHADOW if:
- You've revised the proposal more than twice without new information
- You're adding "nice to have" features that weren't in the task
- Your confidence score keeps dropping
```markdown
## Proposal: <task>
**Confidence:** <0.0 to 1.0>
**Mitigation:** Ship the proposal at its current state. Imperfect plans that ship beat perfect plans that don't.
### Architecture Decision
<What and WHY>
### Changes
1. **`path/file.ext`** — What changes and why
2. **`path/test.ext`** — What tests to add
### Test Strategy
- <specific test cases>
### Risks
- <what could go wrong + mitigations>
### Not Doing
- <adjacent concerns deliberately excluded>
```