From codescope
Take a task description and autonomously research, plan, and execute code changes using graph-informed clarification, hybrid execution, and filesystem coordination.
npx claudepluginhub jwadhwa2259/codescope --plugin codescopeThis skill is limited to using the following tools:
You are the orient pipeline orchestrator. Given a task description, you will autonomously research, plan, and execute code changes using the CodeScope knowledge graph.
Orchestrates parallel Opus agents (2-6 scouts) for deep codebase exploration and implementation on critical, complex coding tasks.
Orchestrates Codex agents for code implementation, file modifications, codebase research, security audits, testing, and multi-step execution workflows.
Use when an approved implementation plan exists and is ready to be executed. Initializes shared state, dispatches parallel developer agents, performs integration review, and runs full verification (lint, tsc, format, build).
Share bugs, ideas, or general feedback.
You are the orient pipeline orchestrator. Given a task description, you will autonomously research, plan, and execute code changes using the CodeScope knowledge graph.
Task: $ARGUMENTS
First verify CodeScope has analyzed this codebase:
node --import tsx/esm src/orient/run-orient.ts --project-root "$(pwd)" --task "$ARGUMENTS" --check-onlybootstrapped is false, tell the user: "CodeScope has not analyzed this codebase yet. Run /codescope:bootstrap first to build the knowledge graph." and stop.bootstrapped is true, continue.Check for unreviewed learnings:
node --import tsx/esm -e " import { loadLearnings } from './src/learning/manager.js'; const parsed = loadLearnings(process.cwd()); const unreviewed = parsed.entries.filter(e => e.status === 'UNVERIFIED' || e.status === 'CONTRADICTED'); console.log(JSON.stringify({ count: unreviewed.length })); "/codescope:review-learnings to curate."Check if $ARGUMENTS contains special flags:
--no-confirm: Skip both gates (scope and plan approval). Set NO_CONFIRM=true.--no-clarify: Skip clarification step. Set NO_CLARIFY=true.Strip flags from the task description before passing to pipeline steps.
Run the orient pipeline for clarification:
node --import tsx/esm src/orient/run-orient.ts --project-root "$(pwd)" --task "{task}" --phase clarification
--no-clarify was detected, add --no-clarify to the command.taskSlug and outputDir for subsequent steps.needsClarification is true:
node --import tsx/esm src/orient/run-orient.ts --project-root "$(pwd)" --task "{task}" --phase scope-contract --task-slug "{taskSlug}" --answers '{answersJSON}'scopeContractPath.needsClarification is false:
scopeContractPath from the clarification output's outputDir + /scope-contract.md.Read the scope contract file and present it to the user:
{Display the scope contract contents: In Scope / Out of Scope / Affected Files table / Assumptions / Conventions in Scope / Risk Flags}
Approve this scope? [approve / edit / reject]
Handle the response:
If NO_CONFIRM is true: skip this gate, display "Scope auto-approved (--no-confirm)." and continue.
node --import tsx/esm src/orient/run-orient.ts --project-root "$(pwd)" --task "{task}" --phase research --task-slug "{taskSlug}"topicsResearched is greater than 0 and a researchPrompt is present:
researchPrompt as the agent's prompt.topicsResearched is 0:
node --import tsx/esm src/orient/run-orient.ts --project-root "$(pwd)" --task "{task}" --phase analysis-and-planning --task-slug "{taskSlug}"plannerPrompt from the output.validationResult.passed is true: "Plan validation: all {checks} checks passed."Read the execution plan from the planPath in the output and present it:
{Display the plan contents: agents with wave assignments, execution order table, estimated changes, conventions, hybrid strategy rationale, validation status}
Approve this plan? [approve / edit / reject]
Handle the response:
If NO_CONFIRM is true: skip this gate, display "Plan auto-approved (--no-confirm)." and continue.
Display: "## Executing..."
node --import tsx/esm src/execution/run-execution.ts --project-root "$(pwd)" --task-slug "{taskSlug}" --plan-path "{planPath}"For each wave in the execution plan:
Display: "## Verifying..."
After execution completes, run the verification pipeline:
Read config.yml to determine the agents.eval_judge.model value. This model will be used for the code review sub-agent (per D-25, code review is a judgment task similar to eval).
Run: node --import tsx/esm src/verify/run-verify.ts --project-root "$(pwd)" --task-slug "{taskSlug}" --task-description "{task}" --plan-path "{planPath}" --scope-contract-path "{scopeContractPath}" --phase static
Parse the JSON output. Check for dispatch requests on stderr.
If stderr contains {"type": "dispatch_review", "prompt": "..."}:
agents.eval_judge.model from config.yml (e.g., if config says model: claude-sonnet-4-20250514, pass that as the model parameter). Code review is a judgment task and MUST use the eval_judge model, not the default model.Display static verify results:
Run: node --import tsx/esm src/verify/run-verify.ts --project-root "$(pwd)" --task-slug "{taskSlug}" --task-description "{task}" --phase runtime
Parse the JSON output. Check for dispatch requests on stderr.
If stderr contains {"type": "dispatch_smoke", "prompt": "..."}:
Display runtime verify results:
The verify report has been written to .claude/codescope/reports/{taskSlug}-{date}.md.
Display: "Verification complete. Report written to {reportPath}."
Display: "Proceeding to evaluation... (Phase 6)"
After verification completes, evaluate the changes and run the debug loop if needed.
Read config.yml to determine agents.eval_judge.model and eval.mode. Store {execution_dir} as .claude/codescope/execution/{taskSlug}.
Run the eval CLI to score changes on 4 criteria:
node --import tsx/esm src/eval/run-eval.ts \
--task-slug "{task_slug}" \
--project-root "{project_root}" \
--report-path "{report_path}" \
--scope-contract-path "{scope_contract_path}" \
--plan-path "{plan_path}" \
--coordination-path "{coordination_path}" \
--research-path "{research_path}" \
--execution-dir "{execution_dir}"
Display: ## Evaluating changes...
Capture stderr for dispatch requests. When you see {"type": "dispatch_eval", "prompt": "..."} on stderr, dispatch a sub-agent (Agent tool) with the eval_judge model from config to act as the LLM-as-judge evaluator. Pass the prompt from the dispatch request. Capture the agent's response (JSON findings array) and pass it back by re-running the eval CLI with the response.
Parse the stdout JSON result. Display: Eval complete: {N} findings ({errors} errors, {warnings} warnings, {info} info)
If result.overallStatus === "PASS", proceed to Step 7 (Learning Capture).
Read eval.mode from config.yml:
interactive: Present findings to the user grouped by criterion using the format from the eval result. For each finding, ask: Action? [debug / ignore / defer]. Collect decisions.
debug: Add to debug listignore: Record ignore pattern in learnings.md (learning system will use this in future evals)defer: Record as TODO in learnings.md with file:line context
Display: Gate: {N} to debug, {N} ignored, {N} deferredauto-debug: Send all findings to debug agent. Display: {N} finding(s) -- all sent to debug agent automatically.
auto-skip-minor: Auto-skip INFO findings, send WARN + ERROR to debug. Display: {N_skipped} INFO finding(s) auto-skipped. {N_debug} WARN + ERROR finding(s) sent to debug agent automatically.
If no findings selected for debug, proceed to Step 7 (Learning Capture).
Write the findings to debug to a temporary JSON file:
echo '{findings_json}' > {execution_dir}/debug-findings.json
Run the debug CLI:
node --import tsx/esm src/debug/run-debug.ts \
--task-slug "{task_slug}" \
--project-root "{project_root}" \
--findings-path "{execution_dir}/debug-findings.json" \
--scope-contract-path "{scope_contract_path}" \
--plan-path "{plan_path}" \
--coordination-path "{coordination_path}" \
--report-path "{report_path}" \
--max-cycles "{max_cycles}" \
--execution-dir "{execution_dir}"
Display: ## Debug cycle {N}/{max}...
Handle stderr dispatch requests:
{"type": "dispatch_fix", "prompt": "..."}: Dispatch a sub-agent (Agent tool) with the debug model from config. This agent fixes the code. It has full tool access: Read, Write, Edit, Bash, Glob, Grep, WebFetch, and CodeScope MCP tools (codescope_blast_radius, codescope_conventions, codescope_recall, codescope_search, Context7, WebSearch). Per DBUG-02.{"type": "dispatch_eval", "prompt": "..."}: Dispatch eval_judge agent for scoped re-eval.{"type": "dispatch_verify", "changedFiles": [...]}: Run scoped re-verify on just the changed files.{"type": "design_decision", "decision": {...}}: Present design decision to user with options. Display the finding, options with impacts. Get user's choice. Return the chosen option ID.The debug CLI handles cycle counting internally (capped at eval.auto_debug_max_cycles from config, default 3).
After debug completes, check the result:
result.remaining.length === 0: All findings resolved. Display: All findings resolved after {N} cycle(s).result.remaining.length > 0: Some findings remain. Display: Max debug cycles ({max}) reached. {remaining} finding(s) remain.
Action? [retry / ignore-all / defer-all]Proceed to Step 7 (Learning Capture).
After evaluation and debug complete, capture learnings from this pipeline run. Per D-02: runs regardless of whether debug was needed.
Check config: read learning.auto_capture from config.yml. If false, display "Learning capture disabled (learning.auto_capture=false). Skipping." and proceed to Step 8.
Run the learning capture CLI:
node --import tsx/esm src/learning/run-learning-capture.ts \
--project-root "$(pwd)" \
--task-slug "{taskSlug}" \
--scope-contract-path "{scopeContractPath}" \
--plan-path "{planPath}" \
--coordination-path "{execution_dir}/coordination.md" \
--report-path "{reportPath}" \
--execution-dir "{execution_dir}"
Capture stderr for dispatch requests. When you see {"type": "dispatch_learning", "prompt": "..."} on stderr:
agents.learning_synthesizer.model.agents.learning_synthesizer.model from config.yml.Parse the stdout JSON result.
status === "skipped": display "Learning capture skipped ({reason})."status === "complete": display "Learning capture: {newLearnings} new learning(s) added, {contradicted} contradiction(s) flagged, {skipped} skipped (cap). Active: {capStatus}."status === "error": display "Learning capture failed: {error}. Pipeline results are still valid."Proceed to Step 8 (Summary).
After all waves, verification, and evaluation complete, display the execution summary:
Total: {duration}s | Files changed: {N} | Agents: {succeeded}/{total} | Verify: {errors} errors, {warnings} warnings | Eval: {eval_status}
If learning capture ran: | Learnings: +{newLearnings} new, {contradicted} contradicted, {capStatus} active
If there were failures, also display:
git diff."/codescope:orient with the same task description."node --import tsx/esm command exits with code 1, parse the error JSON and display the error message to the user./codescope:bootstrap./codescope:onboard..claude/codescope/execution/{taskSlug}/coordination.md provides a full audit trail..claude/codescope/plans/{taskSlug}.md even if rejected per D-16..claude/codescope/execution/{taskSlug}/summary.md.