Start a Luc harness workflow to execute structured, multi-phase tasks. Works for bug fixes, features, refactoring — any task that benefits from understand → plan → implement → validate → commit structure. The harness manages phase transitions, validation, retry loops, and cognitive memory.
npx claudepluginhub aperrix/luc
This skill uses the workspace's default tool permissions.
Execute a structured workflow where an engine manages phase transitions and you focus on doing the work. The engine validates your output at each step — if validation fails, you get specific errors and retry. Memory calls (MuninnDB) persist your reasoning so future runs benefit from past decisions.
Define the CLI path once — all commands below use this:
CLI="${CLAUDE_SKILL_DIR}/../../core/harness/cli.py"
The vault name for all MuninnDB calls is shown in your session context (injected by SessionStart as Vault: <name>). Use it on every muninn_recall, muninn_remember, and muninn_decide call.
Keep codebase-memory and MuninnDB in sync with the project by delegating to a dedicated explorer agent. This keeps the main workflow context clean — you only see what you recall from MuninnDB, not the raw exploration.
Call index_status(repo_path=".") to check whether the codebase index is current. If it is stale, spawn the explorer:
Agent(
prompt="Explore and index the codebase. Follow your instructions.",
subagent_type="luc:explorer",
description="Codebase exploration"
)
Wait for the explorer to finish before starting the workflow.
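A minimal sketch of this gate, assuming index_status returns a dict with a boolean current field (the real return shape may differ):

# Spawn the explorer only when the index is stale ("current" key is an assumption).
status = index_status(repo_path=".")
if not status.get("current", False):
    Agent(
        prompt="Explore and index the codebase. Follow your instructions.",
        subagent_type="luc:explorer",
        description="Codebase exploration"
    )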
1. Start the run:
python3 "$CLI" start go '<task description>'
The engine returns a phase instruction:
{
"status": "next",
"run_id": "a1b2c3d4",
"phase": "understand",
"prompt": "Explore the codebase to understand the current state...",
"required_output": ["findings", "affected_files", "approach"],
"acceptance_criteria": ["Scope is clearly identified with evidence", "Approach is explained and justified"],
"memory": {
"recall": {"mode": "deep", "limit": 5, "traversal": "causal", "tags": ["run:a1b2c3d4", "workflow:fix", "debugging"]},
"remember": true,
"decision_point": true
}
}
If status is "error", stop and report to the user. Store run_id for memory tagging.
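If you script the harness rather than run the commands by hand, the response can be captured as in the sketch below. It assumes the CLI prints its JSON response on stdout, which matches the example above but is an assumption about the harness:

import json
import subprocess

CLI = "core/harness/cli.py"  # resolve from CLAUDE_SKILL_DIR as in the setup step

# Start a run and parse the engine's reply (JSON-on-stdout is an assumption).
result = subprocess.run(
    ["python3", CLI, "start", "go", "<task description>"],
    capture_output=True, text=True,
)
response = json.loads(result.stdout)
if response["status"] == "error":
    raise RuntimeError(f"engine error: {response}")  # stop and report to the user
run_id = response["run_id"]  # keep for memory tags like f"run:{run_id}"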
Each phase runs in a subagent with a fresh context. This keeps your main context clean — you only see the phase prompt, the subagent's summary, and engine responses. All code exploration, diffs, and detailed work stay in the subagent's context.
2. For each phase:
2a. Recall: if memory.recall is present, query MuninnDB before spawning the subagent. The recall results inform the handoff.
muninn_recall(
context="<describe current phase task in 1-2 sentences>",
mode=memory.recall.mode,
traversal_profile=memory.recall.traversal,
tags=memory.recall.tags,
limit=memory.recall.limit,
vault="<vault name>"
)
2b. Contract: declare what the phase will accomplish. Use acceptance_criteria from the engine response as a starting point, then add task-specific objectives.
python3 "$CLI" contract '{"objectives": ["..."], "verification": ["..."], "edge_cases": ["..."]}'
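As a concrete illustration, a filled-in contract for a hypothetical bug-fix phase might look like this (all values invented for the example):

python3 "$CLI" contract '{"objectives": ["Identify why the parser drops trailing newlines"], "verification": ["A regression test reproduces the bug and passes after the fix"], "edge_cases": ["Empty input", "Input that is only a newline"]}'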
2c. Spawn: hand the phase off to a subagent. Include the phase prompt, required_output, acceptance_criteria, recall results, and a one-paragraph summary of the previous phase (if any).
Agent(
prompt="Execute this phase:\n\nPhase: <phase name>\nTask: <task description>\nPrompt: <phase prompt>\nRequired output: <required_output as JSON>\nAcceptance criteria: <acceptance_criteria>\nPrevious phase: <summary or 'first phase'>\nRecall context: <recall results or 'none'>\nVault: <vault name>\n\nDo the work, then return a JSON with all required_output keys filled.\nAlso call muninn_remember and muninn_decide (if applicable) per the memory protocol in references/memory-protocol.md.",
subagent_type="general-purpose",
description="Phase: <phase name>"
)
The subagent works freely with all tools, calls MuninnDB for remember/decide, and returns its output.
2d. Advance: take the subagent's output and advance the engine:
python3 "$CLI" advance '<subagent output JSON>'
| Status | What happened | What to do |
|---|---|---|
| "next" | Phase validated. | Summarize the completed phase in one paragraph (for the next handoff). Start the next phase at 2a. |
| "retry" | Validation failed. | Read the errors, then re-spawn the subagent with the errors and its failed output. |
| "completed" | Run finished. | Go to step 3. |
| "failed" | Max retries exceeded. | Report the errors to the user. |
| "error" | Engine error. | Report to the user. |
3. When the run completes, spawn the evaluator agent to independently verify the work:
Agent(
prompt=<JSON with run_id, workflow, task, and phases (each with name, contract, acceptance_criteria, output)>,
subagent_type="luc:evaluator",
description="End-of-run evaluation"
)
The evaluator returns a structured report with score, pass, findings, and recommendation. Present the findings to the user:
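For orientation, a report might look like the following. The keys come from the fields named above, but the exact shape is an assumption:

{
  "score": 82,
  "pass": true,
  "findings": ["validate phase relied on existing tests only; no new regression test"],
  "recommendation": "accept, but add a regression test for the original bug"
}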
After evaluation:
- Link each phase memory to the next with muninn_link using followed_by, and link all of them to the run with is_part_of.
- Run muninn_consolidate to merge duplicate memories.
- Call muninn_remember with the evaluation score and findings, tagged with ["evaluation", "run:<run_id>"].
- Call muninn_remember with run metrics for cross-session tracking (see the call below). This enables /luc:doctor to analyze harness effectiveness over time — which phases retry most, which workflows score lowest, which components add value.
muninn_remember(
concept="run metrics",
content=<JSON with run_id, workflow, total_phases, total_retries, evaluation_score, evaluation_recommendation>,
type="event",
tags=["metrics", "run:<run_id>", "workflow:<workflow>"],
vault="<vault name>"
)
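For the linking step in the first item, a sketch is below. The muninn_link parameter names are assumptions; consult references/memory-protocol.md for the actual signature and relation types:

muninn_link(
    from_id="<understand-phase memory id>",  # parameter names are assumed
    to_id="<plan-phase memory id>",
    relation="followed_by",
    vault="<vault name>"
)
muninn_link(
    from_id="<plan-phase memory id>",
    to_id="<run memory id>",
    relation="is_part_of",
    vault="<vault name>"
)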
The engine is the single source of truth for phase transitions. These rules exist because bypassing the engine breaks state tracking, memory tagging, and retry logic:
- Use advance to move between phases — the engine validates your output and manages state.
- Return every key listed in required_output — the engine checks for these specific keys.

Other commands:
python3 "$CLI" contract '<json>'
python3 "$CLI" abort "reason"
python3 "$CLI" resume

The default workflow covers all tasks — bug fixes, features, refactoring:
go: understand → plan → implement → validate → commit
The plan phase presents the approach to the user for approval before implementation begins.
For enrichment discipline, recall profiles, and link relation types, see references/memory-protocol.md.