Christian Nennemann b6df3d19fd feat: add automated PDCA loop, domain adapters, cost tracking, DAG renderer
- skills/run: automated PDCA execution loop with --start-from, --dry-run
- skills/artifact-routing: inter-phase artifact protocol with context injection
- skills/act-phase: structured review→fix pipeline with cycle feedback
- skills/domains: domain adapter system (writing, code, research)
- skills/cost-tracking: per-agent cost estimation, budget enforcement
- lib/archeflow-dag.sh: ASCII DAG renderer from JSONL events
- lib/archeflow-report.sh: updated with DAG section, cycle diff, --dag/--summary flags
2026-04-03 11:20:14 +02:00


---
name: domains
description: Domain adapter system that maps ArcheFlow concepts (code-oriented by default) to domain-specific equivalents. Enables writing, research, and other non-code workflows to use the same PDCA pipeline with domain-appropriate terminology, metrics, review focus, and context injection. <example>User: "Use ArcheFlow for my short story"</example> <example>Automatically loaded when colette.yaml is detected</example>
---

# Domain Adapter System

ArcheFlow's PDCA pipeline and archetype system are domain-agnostic. This skill defines how to adapt them to specific domains (writing, code, research, etc.) so that events, metrics, reviews, and context use terminology that makes sense for the work being done.

## Domain Registry

Domain definitions live in `.archeflow/domains/<name>.yaml`. Each domain maps ArcheFlow's generic concepts to domain-specific equivalents and configures what metrics to track, what reviewers should focus on, and what context agents need.

### Writing Domain

```yaml
# .archeflow/domains/writing.yaml
name: writing
description: "Creative writing — stories, novels, non-fiction"

# Concept mapping — how generic ArcheFlow terms translate
concepts:
  implementation: "draft/prose"
  tests: "consistency checks"
  files_changed: "word count delta"
  test_coverage: "voice drift score"
  code_review: "prose review"
  build: "compile/export"
  deploy: "publish"
  refactor: "revision"
  bug: "continuity error"
  feature: "scene/chapter"
  PR: "manuscript submission"

# Metrics — what to track instead of lines/files/tests
metrics:
  - word_count
  - voice_drift_score
  - dialect_density
  - essen_count           # Giesing Gschichten rule: food in every scene
  - scene_count
  - dialogue_ratio

# Review focus areas — override default Guardian/Sage lenses
review_focus:
  guardian:
    - plot_coherence
    - character_consistency
    - timeline_accuracy
    - continuity
  sage:
    - voice_consistency
    - prose_quality
    - dialect_authenticity
    - forbidden_pattern_violations
  skeptic:
    - premise_strength
    - character_motivation
    - ending_satisfaction
  trickster:
    - reader_confusion_points
    - pacing_dead_spots
    - suspension_of_disbelief_breaks

# Context injection — what extra files agents should read per phase
context:
  always:
    - "voice profile YAML (profiles/*.yaml)"
    - "persona YAML (personas/*.yaml)"
    - "character sheets (characters/*.yaml)"
  plan_phase:
    - "series config (colette.yaml)"
    - "previous stories (if series, for continuity)"
    - "story brief / premise"
  do_phase:
    - "scene outline from Creator"
    - "voice profile (for style reference)"
  check_phase:
    - "voice profile (for Sage drift scoring)"
    - "outline (for Guardian coherence check)"
    - "character sheets (for consistency)"

# Model preferences — domain-specific overrides
model_overrides:
  maker: sonnet       # Prose quality matters more than for code
  story-sage: sonnet  # Needs taste for voice evaluation
```

### Code Domain (Default)

```yaml
# .archeflow/domains/code.yaml
name: code
description: "Software development — applications, libraries, infrastructure"

concepts:
  implementation: "code changes"
  tests: "automated tests"
  files_changed: "files changed"
  test_coverage: "test coverage %"
  code_review: "code review"
  build: "build/compile"
  deploy: "deploy"
  refactor: "refactor"
  bug: "bug"
  feature: "feature"
  PR: "pull request"

metrics:
  - files_changed
  - lines_added
  - lines_removed
  - tests_added
  - tests_passing
  - coverage_delta

review_focus:
  guardian:
    - security_vulnerabilities
    - breaking_changes
    - dependency_risks
    - error_handling
  sage:
    - code_quality
    - test_coverage
    - documentation
    - pattern_consistency
  skeptic:
    - design_assumptions
    - scalability
    - alternative_approaches
    - edge_cases
  trickster:
    - malformed_input
    - concurrency_races
    - error_path_exploitation
    - dependency_failures

context:
  always:
    - "README.md"
    - ".archeflow/config.yaml"
  plan_phase:
    - "relevant source files (Explorer identifies)"
    - "existing tests for affected area"
  do_phase:
    - "Creator's proposal"
    - "test fixtures and helpers"
  check_phase:
    - "git diff from Maker"
    - "proposal risk section"

model_overrides: {}  # Code domain uses default archetype model assignments
```

### Research Domain (Example Extension)

```yaml
# .archeflow/domains/research.yaml
name: research
description: "Academic or technical research — papers, analysis, literature review"

concepts:
  implementation: "draft/analysis"
  tests: "citation verification"
  files_changed: "section count"
  test_coverage: "source coverage"
  code_review: "peer review"
  build: "compile (LaTeX/PDF)"
  deploy: "submit/publish"

metrics:
  - word_count
  - citation_count
  - source_diversity
  - claim_count
  - unsupported_claims

review_focus:
  guardian:
    - factual_accuracy
    - citation_validity
    - logical_coherence
    - methodology_soundness
  sage:
    - argument_structure
    - prose_clarity
    - academic_tone
    - completeness

context:
  always:
    - "bibliography/references"
    - "research brief"
  plan_phase:
    - "prior literature notes"
    - "methodology constraints"
  check_phase:
    - "citation database"
    - "claims vs. evidence mapping"

model_overrides:
  maker: sonnet   # Research writing needs quality
```

## Domain Detection

ArcheFlow auto-detects the domain based on project markers. Detection runs once at `run.start`, and the result is stored in the run's event stream.

### Detection Priority (highest first)

| Priority | Signal | Domain | Rationale |
|----------|--------|--------|-----------|
| 1 | CLI flag `--domain <name>` | as specified | Explicit override always wins |
| 2 | Team preset has `domain: <name>` | as specified | Preset knows its domain |
| 3 | `colette.yaml` exists in project root | writing | Colette is the writing platform |
| 4 | `*.bib` or `references/` exists | research | Bibliography signals research |
| 5 | `package.json` exists | code | Node.js project |
| 6 | `Cargo.toml` exists | code | Rust project |
| 7 | `pyproject.toml` exists | code | Python project |
| 8 | `go.mod` exists | code | Go project |
| 9 | `Makefile` or `CMakeLists.txt` exists | code | C/C++ project |
| 10 | No markers found | code | Default fallback |
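A minimal sketch of this priority order (illustrative only; the function and constant names are assumptions, not ArcheFlow's actual implementation):

```python
from pathlib import Path

# Marker file names mirror the detection table above (priorities 5-9).
CODE_MARKERS = [
    "package.json", "Cargo.toml", "pyproject.toml",
    "go.mod", "Makefile", "CMakeLists.txt",
]

def detect_domain(root: Path, cli_domain=None, preset_domain=None) -> str:
    if cli_domain:                        # 1. explicit --domain flag always wins
        return cli_domain
    if preset_domain:                     # 2. team preset declares its domain
        return preset_domain
    if (root / "colette.yaml").exists():  # 3. Colette project => writing
        return "writing"
    if any(root.glob("*.bib")) or (root / "references").is_dir():
        return "research"                 # 4. bibliography => research
    if any((root / m).exists() for m in CODE_MARKERS):
        return "code"                     # 5-9. code project markers
    return "code"                         # 10. default fallback
```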

### Detection in Team Presets

Team presets can declare their domain explicitly:

```yaml
# .archeflow/teams/story-development.yaml
name: story-development
domain: writing            # <-- explicit domain
description: "Short-story development"
plan: [story-explorer, creator]
do: [maker]
check: [guardian, story-sage]
```

When `domain` is set in the preset, detection is skipped entirely.

### Detection Event

Domain detection emits a decision event:

```json
{"ts":"...","run_id":"...","seq":1,"parent":[],"type":"decision","phase":"init","agent":null,"data":{"what":"domain_detection","chosen":"writing","signal":"colette.yaml exists","alternatives":[{"id":"code","reason_rejected":"No code project markers found"}]}}
```

## How Domains Affect Orchestration

### 1. Concept Translation in Reports

The orchestration report and session log use domain-translated terms:

```markdown
# Code domain report
- **Files changed:** 4 files, +120 -30 lines
- **Tests added:** 8 new tests

# Writing domain report (same data, different framing)
- **Word count delta:** +6004 words across 7 scenes
- **Consistency checks:** voice drift 0.12, 2 continuity fixes applied
```
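Concept translation itself can be sketched as a plain lookup with a fallback to the generic term (the helper and mapping names here are illustrative, not ArcheFlow's API):

```python
# Subset of the writing domain's concepts map from this skill.
WRITING_CONCEPTS = {
    "files_changed": "word count delta",
    "tests": "consistency checks",
    "refactor": "revision",
}

def translate(term: str, concepts: dict) -> str:
    """Return the domain-specific term, or the generic term if unmapped."""
    return concepts.get(term, term)
```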

### 2. Domain-Specific Event Data

Events include domain-relevant metrics in their data payload:

```jsonc
// Writing domain — agent.complete
{"type":"agent.complete","data":{"archetype":"maker","duration_ms":180000,"word_count":6004,"voice_drift":0.12,"scenes":7,"dialogue_ratio":0.35,"essen_count":4}}

// Code domain — agent.complete
{"type":"agent.complete","data":{"archetype":"maker","duration_ms":90000,"files_changed":5,"tests_added":12,"coverage_delta":"+3%","lines_added":245,"lines_removed":80}}

// Writing domain — run.complete
{"type":"run.complete","data":{"status":"completed","word_count":6004,"voice_drift_final":0.08,"scenes":7,"dialect_density":0.15,"cycles":1}}

// Code domain — run.complete
{"type":"run.complete","data":{"status":"completed","files_changed":4,"tests_total":20,"coverage":"87%","cycles":2}}
```

### 3. Review Focus Override

When a domain defines `review_focus`, reviewers receive domain-specific instructions instead of the defaults:

```
# Without domain adapter (code defaults):
Guardian → "Check for security vulnerabilities, breaking changes..."

# With writing domain adapter:
Guardian → "Check for plot coherence, character consistency, timeline accuracy, continuity..."
```

The orchestration skill reads the domain's `review_focus` and injects it into the reviewer prompt. The archetype's base personality (virtue, shadow, lens) stays the same — only the checklist changes.
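A sketch of that injection step (the function name is an assumption):

```python
def review_checklist(archetype: str, review_focus: dict, defaults: dict) -> str:
    """Build a reviewer checklist, preferring the domain's focus areas
    and falling back to the archetype's default lens."""
    areas = review_focus.get(archetype) or defaults.get(archetype, [])
    return "Check for: " + ", ".join(a.replace("_", " ") for a in areas)
```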

### 4. Context Injection

The domain's `context` config tells the orchestrator which additional files to pass to each agent:

```
# Plan phase in writing domain:
# Orchestrator automatically includes voice profile, persona, character sheets, series config
# alongside the standard task description and Explorer output

# Check phase in writing domain:
# Guardian gets the outline (for coherence)
# Sage gets the voice profile (for drift scoring)
```

Context injection is additive — domain context is added on top of ArcheFlow's standard context rules (task description, prior phase output, etc.).
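The additive merge can be sketched like this (the function name and the exact context shapes are assumptions):

```python
def phase_context(domain_ctx: dict, phase: str, standard: list) -> list:
    """Standard ArcheFlow context first, then the domain's 'always'
    entries, then phase-specific extras, de-duplicated in order."""
    merged = list(standard)
    extras = domain_ctx.get("always", []) + domain_ctx.get(f"{phase}_phase", [])
    for item in extras:
        if item not in merged:
            merged.append(item)
    return merged
```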

### 5. Model Overrides

If the domain specifies `model_overrides`, those override the default model assignment for the listed archetypes:

```
# Default: Maker uses whatever the workflow assigns (often haiku for cheap tasks)
# Writing domain: Maker uses sonnet (prose quality matters)
# Research domain: Maker uses sonnet (analysis quality matters)
```

Model overrides interact with cost tracking — the cost-tracking skill reads the effective model assignment (after domain overrides) for its estimates.
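The resolution order can be sketched as follows (the function name and the "haiku" fallback are assumptions):

```python
def effective_model(archetype: str, workflow_models: dict, overrides: dict) -> str:
    """Domain model_overrides beat the workflow's assignment;
    'haiku' stands in here as an assumed workflow default."""
    return overrides.get(archetype, workflow_models.get(archetype, "haiku"))
```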

## Adding a New Domain

1. Create `.archeflow/domains/<name>.yaml` following the schema above
2. Add detection signals to the priority table (or rely on `--domain` / a team preset)
3. Define custom archetypes if needed (e.g., `story-explorer` for writing)
4. Test with `--domain <name> --dry-run` to verify detection and context injection

### Minimum Viable Domain

Only `name`, `concepts`, and `metrics` are required. Everything else has sensible defaults:

```yaml
name: legal
description: "Legal document drafting and review"

concepts:
  implementation: "draft"
  tests: "compliance checks"
  code_review: "legal review"

metrics:
  - clause_count
  - citation_count
  - compliance_score
```

Missing sections fall back to the code domain defaults.
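The fallback itself is a shallow per-section merge, sketched here (the function name is an assumption):

```python
def resolve_domain(domain: dict, code_defaults: dict) -> dict:
    """Fill any top-level section missing from a domain file
    with the corresponding code domain default."""
    resolved = dict(code_defaults)
    resolved.update(domain)  # domain-provided sections win
    return resolved
```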

## Integration with Other Skills

- **orchestration**: Reads domain config at `run.start`; applies concept translation, context injection, model overrides, and review focus throughout the run
- **process-log**: Domain-specific event data fields are included in `agent.complete` and `run.complete` payloads
- **cost-tracking**: Reads `model_overrides` from the active domain to calculate accurate cost estimates
- **custom-archetypes**: Domain-specific archetypes (e.g., `story-explorer`, `story-sage`) are defined per-project and referenced in team presets
- **workflow-design**: Custom workflows can reference a domain explicitly

## Design Principles

1. **Additive, not replacing.** Domains add context and translate terms. They do not change the PDCA cycle, archetype system, or event schema.
2. **Graceful degradation.** If no domain config exists, everything works as before (code domain defaults).
3. **One domain per run.** A run operates in exactly one domain. Multi-domain projects use separate runs.
4. **Domain config is data, not code.** YAML files, no scripts. Portable across projects.