Bootstraps a canonical OpenSpec for a product by interactively setting product details, discovering modules in the codebase via find/Bash, mining historical Claude Code sessions, and generating a module registry as the source of truth.
```bash
npx claudepluginhub nraychaudhuri/anchor --plugin anchor
```
uv pip install -r ${CLAUDE_SKILL_DIR}/requirements.txt -q 2>/dev/null || true
Bootstrap the Context Companion canonical spec from scratch.
Read references/openspec_format.md now — it defines the spec.json format
this skill produces.
Check first: Run cat .companion/product.json 2>/dev/null
If it exists, use AskUserQuestion:
question: "Seed is already configured for [product]. What would you like to do?"
options: ["Re-seed from scratch", "Add new sessions only", "Cancel"]
If not configured:
If $ARGUMENTS is non-empty, use it as PRODUCT_NAME — skip this question.
Otherwise, ask:
question: "What is this product called?"
options: ["Other (type your product name)"]
Store answer as PRODUCT_NAME.
Run:
for d in ~/.claude/projects/*/; do
count=$(ls "$d"*.jsonl 2>/dev/null | wc -l | tr -d ' ')
[ "$count" -gt 0 ] && echo "$count $d"
done
Build options from projects with >0 sessions. Use AskUserQuestion:
question: "Which projects belong to [PRODUCT_NAME]? (select all that apply)"
options: ["<path> (N sessions)", ...]
multiSelect: true
Store selected directory hashes as SELECTED_HASHES.
question: "Where should the canonical spec live?"
options: [
"[current project]/product-spec (recommended — version control here)",
"~/product-specs/[PRODUCT_NAME] (global, outside any repo)",
"Other (type a custom path)"
]
Store as SPEC_LOCATION.
question: "Ready to seed [PRODUCT_NAME]?\n Projects: N (total sessions)\n Spec: [SPEC_LOCATION]"
options: ["Yes, start seeding", "Cancel"]
If yes, run setup:
uv run python ${CLAUDE_SKILL_DIR}/scripts/setup.py \
--product "[PRODUCT_NAME]" \
--spec-location "[SPEC_LOCATION]" \
--project-hashes "[hash1,hash2,...]" \
--cwd "$PWD"
Capture CONFIG_PATH from the line CONFIG_PATH=... in output.
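One way to pull that value out of the setup output — a sketch, assuming setup.py prints a line of the exact form `CONFIG_PATH=/some/path` to stdout:

```python
def extract_config_path(output: str) -> str:
    """Find the CONFIG_PATH=... line in setup.py's stdout and return the path."""
    for line in output.splitlines():
        if line.startswith("CONFIG_PATH="):
            return line.split("=", 1)[1].strip()
    raise ValueError("setup.py output did not contain a CONFIG_PATH= line")
```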
Spawn a Task subagent with these exact instructions:
- Run:
find . -maxdepth 3 -type d | grep -v node_modules | grep -v .git | grep -v __pycache__ | sort
- Read: README.md, CLAUDE.md, package.json or pyproject.toml if present
- Identify 5–15 major modules. Name them lowercase-hyphenated.
- Output ONLY a JSON array between ===MODULES_START=== and ===MODULES_END===:
===MODULES_START===
[{"name": "auth", "description": "one sentence", "paths": ["src/auth/"]}]
===MODULES_END===
Parse the output between ===MODULES_START=== and ===MODULES_END===.
Write the result to .companion/modules.json using the Write tool.
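The marker parsing can be sketched as follows (this assumes the subagent emitted exactly one ===MODULES_START===/===MODULES_END=== pair wrapping a JSON array):

```python
import json

def parse_modules(text: str) -> list:
    """Extract and decode the JSON array between the module markers."""
    start = text.index("===MODULES_START===") + len("===MODULES_START===")
    end = text.index("===MODULES_END===", start)
    return json.loads(text[start:end].strip())
```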
Use AskUserQuestion to confirm:
question: "Found these modules: [list names]. Does this look right?"
options: ["Yes, looks good", "I want to make changes"]
If changes needed, update .companion/modules.json conversationally then confirm again.
Read .companion/modules.json. Tell the developer:
Writing specs for N modules in parallel...
For each module, spawn a parallel Task subagent with these exact instructions (substitute REAL values — no placeholders):
Read these paths: [list actual paths from module.paths] Analyze what this module does at the product/architecture level.
Output the spec content between these EXACT markers — nothing else:
===SPEC_START===
{
  "module": "[actual-module-name]",
  "summary": "2-4 sentence description of what this module owns and does. Architecture level only, no implementation details.",
  "business_rules": [
    "Rule stated as plain English. What the system must do.",
    "Another rule."
  ],
  "non_negotiables": [
    "Absolute constraint that must never be violated."
  ],
  "tradeoffs": [
    {"decision": "What was chosen", "reason": "Why", "accepted_cost": "What was given up"}
  ],
  "conflicts": [],
  "lineage": []
}
===SPEC_END===
Rules for content:
- Stay at product/architecture level. No DynamoDB field names, no function signatures.
- business_rules: what the module MUST do (behavior constraints)
- non_negotiables: what the module must NEVER do (absolute prohibitions)
- tradeoffs: only if you can identify a clear architectural choice with accepted cost
- Empty arrays are fine if nothing applies.
After ALL subagents complete:
- Parse each subagent's output between ===SPEC_START=== and ===SPEC_END===.
- Write each spec to [SPEC_LOCATION]/openspec/specs/[module-name]/spec.json.
- Verify: find [SPEC_LOCATION]/openspec/specs -name "spec.json" | sort
If any modules are missing, write their spec from the subagent output directly.
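A quick shape check before writing each spec can catch malformed subagent output. A sketch — the required keys simply mirror the template above, nothing beyond that is assumed:

```python
import json

REQUIRED_KEYS = {"module", "summary", "business_rules", "non_negotiables",
                 "tradeoffs", "conflicts", "lineage"}

def validate_spec(raw: str) -> dict:
    """Decode a spec JSON string and check that every required key is present."""
    spec = json.loads(raw)
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise ValueError(f"spec for {spec.get('module')!r} missing keys: {sorted(missing)}")
    return spec
```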
IMPORTANT: Run mining directly in the main agent via Bash, NOT through Task subagents. Subagents can return before their background processes finish writing files, causing a race condition with the reconciler. Direct Bash calls block until complete.
Tell the developer: Mining N sessions across M projects...
Get all transcript paths into a variable:
uv run python -c "
import json
from pathlib import Path
config = json.loads(Path('[CONFIG_PATH]').read_text())
for p in config['projects']:
d = Path.home() / '.claude' / 'projects' / p['hash']
if d.exists():
for f in sorted(d.glob('*.jsonl')):
print(f)
" > /tmp/companion_transcripts.txt
wc -l /tmp/companion_transcripts.txt
(Replace [CONFIG_PATH] with the actual path from Step 0 output)
Run all sessions in one blocking Bash call:
uv run python ${CLAUDE_SKILL_DIR}/scripts/mine_sessions.py \
[CONFIG_PATH] \
$(cat /tmp/companion_transcripts.txt | tr '\n' ' ')
The script handles all sessions sequentially with retry logic built in. It prints progress to stderr and a JSON summary to stdout when fully done.
After the command returns, verify:
find [SPEC_LOCATION]/openspec/changes -maxdepth 1 -mindepth 1 -type d | sort
Check the checkpoint for any failures:
uv run python -c "
import json
from pathlib import Path
cp = Path('[CONFIG_PATH]').parent / 'mined_sessions.json'
if cp.exists():
data = json.loads(cp.read_text())
ok = sum(1 for v in data.values() if v.get('success'))
short = sum(1 for v in data.values() if v.get('reason') == 'too_short')
failed = sum(1 for v in data.values() if not v.get('success') and v.get('reason') == 'extraction_failed')
print(f'Mined: {ok} Too short: {short} Failed: {failed}')
"
If there are failed sessions, re-run the same command — the checkpoint skips already-successful sessions and retries only the failures.
Only start reconcile after the blocking mining command has fully returned.
Tell the developer: Reconciling session extractions into canonical specs...
uv run python ${CLAUDE_SKILL_DIR}/scripts/reconcile.py \
[CONFIG_PATH] \
.companion/modules.json
The reconciler merges the per-session extractions into the canonical module specs — updating rules, constraints, tradeoffs, and lineage entries — and flags any contradictions as conflicts for review.
Show the developer:
✓ Seed complete for [PRODUCT_NAME]
Canonical spec: [SPEC_LOCATION]/openspec/specs/
Modules: N
• [module]: N rules, N constraints, N tradeoffs, N lineage entries
...
Sessions mined: N (across M projects)
Conflicts: N
If there are conflicts, use AskUserQuestion:
question: "N conflicts need your decision. Review now?"
options: ["Yes, show me", "I'll review later"]
If yes, read conflicts_pending.json and walk through each conflict,
explaining the contradiction and asking the developer to decide.
Finally:
Next steps:
1. git add [SPEC_LOCATION] && git commit -m "feat: initial canonical spec"
2. The companion now reads this spec on every session start
3. Session extractions live in openspec/changes/ — re-run this skill and choose "Add new sessions only" to mine future sessions