claude-archeflow-plugin/skills/agent-diagnostic/SKILL.md
Christian Nennemann ed821097de feat: add 3-Sets agent diagnostic and attention filters
New skill: agent-diagnostic — applies the 3-Sets framework
(Tool-Set, Skill-Set, Mind-Set) to agent orchestration:

- Pre-orchestration diagnostic: check each agent's configuration
  across three dimensions, fix the weakest set first
- Chain principle: weakest set caps output (Opus + bad prompt = waste)
- Alignment principle: modest aligned agents beat excellent misaligned ones
- Attention filters: each archetype reads only relevant artifacts
- Post-orchestration learning: extract learnings to persistent memory
  structured by the three sets

Based on the 3-Sets Method diagnostic framework.
2026-04-02 18:32:18 +00:00


---
name: agent-diagnostic
description: Use before orchestration to diagnose agent configuration, and after orchestration to extract learnings. Applies the 3-Sets diagnostic (Tool-Set, Skill-Set, Mind-Set) to optimize agent alignment.
---

# Agent Diagnostic — 3-Sets Analysis

Before spawning agents, diagnose their configuration across three dimensions. The weakest dimension caps the agent's output. Alignment across dimensions matters more than excellence in any single one.

## The Three Sets

| Set | What It Is | Agent Equivalent |
|---|---|---|
| Tool-Set | What the agent can access | File read/write, git, bash, MCP servers |
| Skill-Set | What the agent's model can do | Haiku (fast/cheap), Sonnet (balanced), Opus (deep reasoning) |
| Mind-Set | How the agent approaches the task | Archetype definition, system prompt focus |
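As a minimal sketch, the three sets can be modeled as one record per agent (the field names and the tool/tier strings are illustrative assumptions, not part of the skill):

```python
from dataclasses import dataclass


@dataclass
class AgentConfig:
    """One agent, diagnosed across the three sets."""
    archetype: str
    tool_set: set[str]  # Tool-Set: what the agent can access
    skill_set: str      # Skill-Set: model tier ("haiku" | "sonnet" | "opus")
    mind_set: str       # Mind-Set: the archetype/system prompt


# Example: an Explorer gets read-only tools, the cheap tier, a focused prompt.
explorer = AgentConfig(
    archetype="Explorer",
    tool_set={"file_read", "grep", "git_log"},  # deliberately no file_write
    skill_set="haiku",
    mind_set="Survey the codebase and synthesize patterns relevant to the task.",
)
```

Diagnosing an agent then means inspecting one record, not re-reading a prompt file.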

## Pre-Orchestration Diagnostic

Before each orchestration, run a quick check per agent:

### Tool-Set Check

- Does the agent have the tools it needs for its role?
  - Explorer needs: file read, grep, git log — NOT file write
  - Maker needs: file read/write, git, bash, test runner
  - Guardian needs: file read, git diff — NOT file write
- Does the agent have tools it DOESN'T need? Remove them. Excess tools create noise and distraction.

**Bottleneck signal:** Agent can't perform its core task due to missing capability. **Fix:** Add the missing tool. Don't upgrade the model — it won't compensate.
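The checklist above is mechanical enough to automate. A hedged sketch, assuming the role→tool map and tool names from the bullets (they are illustrative, not a fixed API):

```python
# Required-tool map derived from the checklist above (illustrative names).
REQUIRED_TOOLS = {
    "explorer": {"file_read", "grep", "git_log"},
    "maker": {"file_read", "file_write", "git", "bash", "test_runner"},
    "guardian": {"file_read", "git_diff"},
}


def tool_set_check(archetype: str, granted: set[str]) -> tuple[set[str], set[str]]:
    """Return (missing, excess): missing tools block the core task;
    excess tools create noise and should be removed."""
    required = REQUIRED_TOOLS[archetype.lower()]
    return required - granted, granted - required


missing, excess = tool_set_check("Guardian", {"file_read", "git_diff", "file_write"})
# missing is empty; excess flags file_write, which a Guardian should not have
```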

### Skill-Set Check

- Is the model tier matched to the cognitive demand?
  - Research, filtering, pattern matching → Haiku (cheap, fast)
  - Design, code generation, structured review → Sonnet (balanced)
  - Holistic judgment, complex trade-offs, architecture → Opus (deep, expensive)
| Archetype | Default Tier | Why |
|---|---|---|
| Explorer | Haiku | Pattern matching and synthesis — breadth over depth |
| Creator | Sonnet | Design requires reasoning but not deep judgment |
| Maker | Sonnet | Code generation is Sonnet's sweet spot |
| Guardian | Sonnet | Security review needs structured reasoning |
| Skeptic | Sonnet | Assumption challenging needs analytical depth |
| Trickster | Haiku | Edge case generation is fast, creative work |
| Sage | Sonnet | Quality review needs good judgment; Opus only for large changes |

**Bottleneck signal:** Agent produces shallow output on complex tasks, or expensive model on simple tasks. **Fix:** Adjust model tier. Don't add more tools — they won't compensate for reasoning limits.
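The default-tier table reduces to a small routing function. A sketch under stated assumptions: the lowercase tier names and the `large_change` escalation flag are inventions for illustration.

```python
# Default tiers from the archetype table above.
DEFAULT_TIER = {
    "explorer": "haiku",
    "creator": "sonnet",
    "maker": "sonnet",
    "guardian": "sonnet",
    "skeptic": "sonnet",
    "trickster": "haiku",
    "sage": "sonnet",
}


def route_model(archetype: str, large_change: bool = False) -> str:
    """Match model tier to cognitive demand; the Sage escalates to
    Opus only for large changes."""
    if archetype.lower() == "sage" and large_change:
        return "opus"
    return DEFAULT_TIER[archetype.lower()]
```

Keeping the routing explicit makes Skill-Set waste (an expensive model on a simple task) visible at spawn time rather than on the invoice.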

### Mind-Set Check

- Is the archetype prompt focused on the right concern?
- Does the prompt contain contradictions? ("Be thorough" + "Be fast")
- Is the shadow definition specific enough to be detectable?
- Is the prompt appropriately sized? (Under 500 words — longer prompts dilute focus)

**Bottleneck signal:** Agent produces generic output, misses its archetype's core concern, or falls into shadow immediately. **Fix:** Sharpen the prompt. Don't upgrade the model — a vague prompt stays vague on any model.
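Two of these checks (size and contradictions) can be linted before spawning. The 500-word cap and the "thorough"/"fast" pair come from the checklist; the lint function and the second pair are hypothetical:

```python
# Contradictory instruction pairs worth flagging; extend per project.
CONTRADICTION_PAIRS = [("thorough", "fast"), ("exhaustive", "brief")]


def mind_set_lint(prompt: str) -> list[str]:
    """Flag oversized or self-contradictory archetype prompts."""
    text = prompt.lower()
    issues = []
    if len(text.split()) > 500:
        issues.append("prompt over 500 words; longer prompts dilute focus")
    for a, b in CONTRADICTION_PAIRS:
        if a in text and b in text:
            issues.append(f"contradiction: '{a}' vs '{b}'")
    return issues
```

Focus and shadow detectability still need human judgment; the lint only catches the mechanical failures.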

## The Chain Principle

The weakest set determines the result:

Tool-Set: 90  Skill-Set: 90  Mind-Set: 30  →  Output: ~30

An Opus model (Skill-Set: 100) with a vague prompt (Mind-Set: 30) wastes money. A Haiku model (Skill-Set: 60) with a perfectly focused archetype (Mind-Set: 90) and the right tools (Tool-Set: 80) produces better results at 1/50th the cost.

Always fix the weakest set first.
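In arithmetic terms, the chain principle is just a minimum over the three scores (the 0–100 scale is illustrative):

```python
def expected_output(tool_set: int, skill_set: int, mind_set: int) -> int:
    """The weakest set caps the result."""
    return min(tool_set, skill_set, mind_set)


# Opus with a vague prompt is capped by the prompt:
assert expected_output(90, 100, 30) == 30
# A well-aligned Haiku agent is capped far higher, at a fraction of the cost:
assert expected_output(80, 60, 90) == 60
```

Raising any score other than the minimum changes nothing, which is exactly why the weakest set gets fixed first.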

## The Alignment Principle

Three agents with modest but aligned configurations outperform three individually excellent but misaligned agents.

Signs of misalignment:

- Explorer researches topics the Creator doesn't use in the proposal (Mind-Set mismatch)
- Maker has tools the proposal doesn't reference (Tool-Set excess)
- Guardian reviews at a threat level inappropriate to the context (Mind-Set miscalibration)
- Expensive model on a task that doesn't need it (Skill-Set waste)

## Post-Orchestration Learning

After each orchestration, extract learnings to `.archeflow/memory/`:

### What to Record

**Tool-Set learnings:**

- "This project uses pnpm, not npm" → future Makers know
- "The test runner is vitest, not jest" → future Makers and Sages know
- "No database access in CI" → future Guardians adjust threat model

**Skill-Set learnings:**

- "Complex type inference in this codebase requires Sonnet minimum" → future routing
- "Haiku was sufficient for all Check phase reviews in this project" → cost savings

**Mind-Set learnings:**

- "Guardian was paranoid on auth module — auth tests are comprehensive, calibrate to normal risk" → future calibration
- "Explorer rabbit-holed in the monorepo — add 10-file cap for this codebase" → future shadow tuning

### Memory Format

Write to `.archeflow/memory/<category>.md`:

```markdown
## Tool-Set
- Package manager: pnpm (not npm)
- Test runner: vitest
- CI: GitHub Actions, no DB access in CI

## Skill-Set
- Type-heavy modules need Sonnet minimum
- Standard CRUD routes work fine with Haiku review

## Mind-Set
- Auth module: well-tested, normal risk level (don't over-guard)
- Payment module: no tests, elevated risk (Guardian should be thorough)
```

Keep entries factual and specific. No opinions, no predictions. Update after each orchestration — don't append endlessly, revise what changed.
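One way to honor "revise, don't append" is to keep memory in a mapping and overwrite keys before serializing. A sketch, assuming a `{set_name: {topic: fact}}` shape (the helper and shape are not part of the skill):

```python
def render_memory(memory: dict[str, dict[str, str]]) -> str:
    """Serialize {set_name: {topic: fact}} into the memory-file format.
    Revising a fact overwrites its dict key, so the file never
    accumulates stale duplicate entries."""
    blocks = []
    for section, entries in memory.items():
        lines = [f"## {section}"]
        lines += [f"- {topic}: {fact}" for topic, fact in entries.items()]
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)


memory = {"Tool-Set": {"Package manager": "npm"}}
memory["Tool-Set"]["Package manager"] = "pnpm (not npm)"  # revise, don't append
```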

## Attention Filters

Each archetype reads only what's relevant from shared context:

| Archetype | Reads | Ignores |
|---|---|---|
| Explorer | Task description, codebase | Prior proposals |
| Creator | Explorer's research, task description | Implementation details |
| Maker | Creator's proposal | Explorer's research, reviews |
| Guardian | Maker's git diff, proposal's risk section | Explorer's research |
| Skeptic | Creator's proposal (assumptions) | Git diff details |
| Trickster | Maker's git diff only | Everything else |
| Sage | Proposal + implementation + diff | Explorer's raw research |

When spawning agents, pass only the relevant artifacts — not everything. This reduces context window waste and sharpens focus.
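The filter table can be applied mechanically when assembling each agent's context. A sketch; the artifact keys are illustrative stand-ins for the shared artifacts named above:

```python
# Allowed artifacts per archetype, from the attention-filter table.
ATTENTION = {
    "explorer": {"task", "codebase"},
    "creator": {"task", "research"},
    "maker": {"proposal"},
    "guardian": {"diff", "risk_section"},
    "skeptic": {"proposal"},
    "trickster": {"diff"},
    "sage": {"proposal", "implementation", "diff"},
}


def artifacts_for(archetype: str, shared: dict[str, str]) -> dict[str, str]:
    """Pass an agent only the artifacts its attention filter allows."""
    allowed = ATTENTION[archetype.lower()]
    return {name: body for name, body in shared.items() if name in allowed}


shared = {"task": "...", "research": "...", "proposal": "...", "diff": "..."}
# The Trickster reads the Maker's diff and nothing else:
assert artifacts_for("Trickster", shared) == {"diff": "..."}
```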