code-critic
Performs multi-level AI code review (diff isolation, file context, codebase) via parallel Task agents. Standalone for git changes or PRs; triggers on 'analyze my changes' etc.
npx claudepluginhub in-the-loop-labs/pair-review --plugin code-critic

This skill uses the workspace's default tool permissions.
Perform a three-level code review analysis and return curated suggestions.
Bundled references and scripts:

- references/level1-balanced.md
- references/level1-fast.md
- references/level1-thorough.md
- references/level2-balanced.md
- references/level2-fast.md
- references/level2-thorough.md
- references/level3-balanced.md
- references/level3-fast.md
- references/level3-thorough.md
- references/orchestration-balanced.md
- references/orchestration-fast.md
- references/orchestration-thorough.md
- scripts/git-diff-lines
This skill includes a scripts/git-diff-lines script that annotates git diff output with explicit OLD and NEW line numbers. Each subagent should invoke it by name (git-diff-lines) — the orchestrating agent must ensure the script's directory is on PATH (e.g., via PATH="<skill-dir>/scripts:$PATH").
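The PATH arrangement can be sketched as follows. SKILL_DIR here is a hypothetical install location, not something the skill specifies; substitute wherever the plugin actually lives:

```shell
# Sketch only: SKILL_DIR is a placeholder for the real install location.
SKILL_DIR="${SKILL_DIR:-$HOME/.claude/skills/code-critic}"

# Prepend the scripts directory so subagents can invoke the script
# by bare name (git-diff-lines) without knowing the full path.
export PATH="$SKILL_DIR/scripts:$PATH"
```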
Determine what's being reviewed:
- Run git diff --name-only HEAD for changed files.
- For a branch or PR review, find the merge base (git merge-base main HEAD), then diff against it.
- Get the PR title and description if available.

Collect:
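The base-selection logic above might look like the sketch below. The throwaway repository exists only so the snippet is self-contained; in real use the commands run inside the project being reviewed, and `main` as the default branch is an assumption:

```shell
# Build a scratch repo with a main branch and a feature branch on top of it.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
git checkout -q -b feature
echo hi > changed.txt
git add changed.txt
git -c user.email=a@b -c user.name=a commit -q -m change

# Diff against the merge base rather than HEAD so only the
# branch's own changes are reviewed.
BASE=$(git merge-base main HEAD)
git diff --name-only "$BASE"   # lists changed.txt
```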
Obtain the prompt instructions for each analysis level. Use the tier argument (default: balanced).
If get_analysis_prompt is in your available tools (i.e., the pair-review MCP server is connected):
- Call get_analysis_prompt for each level you will run: level1, level2, and (unless skipLevel3 is true) level3.
- Call get_analysis_prompt with promptType: "orchestration" for the orchestration step.
- Pass the tier argument to each call.
- Pass any custom instructions (from the customInstructions argument, or gathered from user context in Step 1) as the customInstructions parameter — this injects them into the rendered prompt.

Otherwise (standalone mode — no MCP connection):
Read the prompt files from the references/ directory:

- references/level1-{tier}.md
- references/level2-{tier}.md
- references/level3-{tier}.md (unless skipLevel3)
- references/orchestration-{tier}.md

Launch two or three Task agents simultaneously (subagent_type: "general-purpose"), depending on skipLevel3. Each task must:
- Run the git-diff-lines script to get the annotated diff.

Pass each task the following context in its prompt:

- The instruction to invoke git-diff-lines by name (ensure the script's directory is on PATH)

Level 1 — Analyze changes in isolation (diff only)
Level 2 — Analyze changes in file context (full files)
Level 3 — Analyze changes in codebase context (architecture, dependencies) — skip if skipLevel3 is true
Launch one more Task agent (subagent_type: "general-purpose") that:
- Merges the outputs of all levels (use [] for Level 3 if skipped)

Push the orchestrated JSON to the pair-review web UI so suggestions appear inline. This step does not require MCP — it uses a direct HTTP POST with a fallback to http://localhost:7247 when the MCP get_server_info tool is unavailable.
Determine server URL:
- If the get_server_info MCP tool is available, call it and use the url field.
- Otherwise, read the port from the local config: cat ~/.pair-review/config.json 2>/dev/null | jq -r '.port // empty'
- If neither yields a URL, fall back to http://localhost:7247.

Build the POST body from the orchestrated output:
- path (absolute working directory from pwd) and headSha (from git rev-parse HEAD)
- repo (owner/repo) and prNumber
- provider and model to describe what ran the analysis (e.g., the AI provider and model used)
- summary, suggestions, and fileLevelSuggestions from the orchestrated JSON

POST via curl to ${SERVER_URL}/api/analyses/results:
```shell
curl -s --connect-timeout 3 --max-time 10 \
  -X POST "${SERVER_URL}/api/analyses/results" \
  -H "Content-Type: application/json" \
  -d @- <<'PAYLOAD'
{ ... }
PAYLOAD
```
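A sketch of what the elided payload might contain, using only the field names listed above; every value is an illustrative placeholder and the exact schema may differ:

```json
{
  "path": "/home/user/my-project",
  "headSha": "abc123",
  "repo": "owner/repo",
  "prNumber": 42,
  "provider": "example-provider",
  "model": "example-model",
  "summary": "One-paragraph review summary",
  "suggestions": [],
  "fileLevelSuggestions": []
}
```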
A successful import returns HTTP 201 with { runId, reviewId, totalSuggestions, status: "completed" }.
Graceful degradation: If the request fails (server not running, timeout, etc.), log a short warning and continue to the Report step. The push is best-effort.
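A best-effort sketch combining the URL fallback with the push. The empty {} stands in for the real orchestrated JSON; only the config path, the 7247 fallback, and the /api/analyses/results route come from this document:

```shell
# Resolve the server URL: configured port if readable, else the default.
if command -v jq >/dev/null 2>&1; then
  PORT=$(cat ~/.pair-review/config.json 2>/dev/null | jq -r '.port // empty')
fi
SERVER_URL="http://localhost:${PORT:-7247}"

# Stand-in payload; real use substitutes the orchestrated JSON.
printf '{}' > /tmp/payload.json

# Attempt the push, but never let a failure block the report step.
if curl -s --connect-timeout 3 --max-time 10 \
     -X POST "${SERVER_URL}/api/analyses/results" \
     -H "Content-Type: application/json" \
     -d @/tmp/payload.json >/dev/null 2>&1; then
  PUSHED=yes
else
  echo "warning: could not push results to pair-review UI; continuing" >&2
  PUSHED=no
fi
```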
Note in the report whether results were successfully pushed to the pair-review UI.
Present the curated suggestions to the user, organized by file. For each suggestion: