From rune
Intakes PR review feedback, code comments, or issues for structured processing before action. Verifies suggestions against codebase in default PR Review Mode; triages issues into ready-for-agent states in Issue Triage Mode.
```bash
npx claudepluginhub rune-kit/rune --plugin @rune/analytics
```

This skill uses the workspace's default tool permissions.
The counterpart to `review`. While `review` finds issues in code, `review-intake` handles the response when someone finds issues in YOUR code. Enforces a verification-first discipline: understand fully, verify against codebase reality, then act. Prevents the common failure mode of blindly implementing suggestions that break things or don't apply.
Enforces strict 6-step protocol for processing code review feedback: read fully, restate, verify against codebase, evaluate architecture fit, technical response or pushback, implement one-by-one.
| Mode | When | Workflow |
|---|---|---|
| PR Review Mode (default) | Input is PR comments / code review feedback / external suggestions | Phases 1-6 below — absorb, comprehend, verify, evaluate, respond, implement |
| Issue Triage Mode | Input is issue tracker item (gh-42, URL, pasted issue body), or user says "triage" / "process the inbox" | See references/issue-triage.md — state machine (needs-triage → ready-for-agent / ready-for-human / needs-info / wontfix) + repro-first for bugs + AGENT-BRIEF emission |
Both modes share Phase 4.5 (Rejection KB Write) — wontfix-enhancement from Issue Triage and OUT OF SCOPE from PR Review both write .out-of-scope/<slug>.md.
Usage:
- `/rune review-intake` — manual invocation, PR Review Mode by default
- `/rune review-intake <issue-ref>` — Issue Triage Mode (issue number, URL, or path)
- `/rune review-intake --inbox` — Issue Triage Mode batch sweep (unlabeled + needs-triage + needs-info-with-activity)
- `cook` or `fix` receives PR review comments → PR Review Mode

Agent composition:
- `scout` (L3): verify reviewer claims against actual codebase
- `fix` (L2): apply verified changes
- `test` (L2): add tests for edge cases reviewers found
- `hallucination-guard` (L3): verify suggested APIs/packages exist
- `sentinel` (L2): re-check security if reviewer flagged concerns
- `cook` (L1): Phase 5 quality gate when external review arrives
- `review` (L2): when self-review surfaces issues to address

For Issue Triage Mode, see references/issue-triage.md. Both modes converge at Phase 4.5 for rejection KB writes.
Read ALL feedback items before reacting. Do not implement anything yet.
Classify each item:
| Type | Example | Priority |
|---|---|---|
| BLOCKING | Security vuln, data loss, broken build | P0 — fix now |
| BUG | Logic error, off-by-one, race condition | P1 — fix soon |
| IMPROVEMENT | Better pattern, cleaner API, perf gain | P2 — evaluate |
| STYLE | Naming, formatting, conventions | P3 — quick fix |
| OPINION | "I would do it differently" | P4 — evaluate |
For each item, restate the technical requirement in your own words.
If ANY item is unclear → STOP entirely. Do not implement clear items while unclear ones remain. Items may be interconnected — partial understanding = wrong implementation.

Ask: "I understand items [X]. Need clarification on [Y] before proceeding."
Before implementing ANY suggestion, verify it against the codebase:
For each item:
1. Does the file/function reviewer references actually exist?
2. Is the reviewer's understanding of current behavior correct?
3. Will this change break existing tests?
4. Does it conflict with architectural decisions already made?
5. If suggesting a package/API — does it actually exist? (hallucination-guard)
Use scout to check claims. Use grep to find actual usage patterns.
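The checks above can be sketched as plain shell. A minimal sketch with hypothetical names (`src/auth/session.ts` and `refreshSession` are placeholders; a throwaway tree is created so the commands run anywhere):

```shell
# Placeholder setup: stand-in for the real repo under review.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/auth"
printf 'export function refreshSession() {}\n' > "$tmp/src/auth/session.ts"

# 1. Does the file the reviewer references actually exist?
test -f "$tmp/src/auth/session.ts" && echo "file exists"

# 2. Does the named function exist, and where is it actually used?
#    Prints each file containing the symbol, confirming it is real.
grep -rln "refreshSession" --include=*.ts "$tmp/src"
```

In the real flow the paths come from the review comments themselves, and a hit count of zero on step 2 is itself evidence for a PUSHBACK or YAGNI verdict.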
For each verified item, decide:
| Verdict | Action |
|---|---|
| CORRECT + APPLICABLE | Queue for implementation |
| CORRECT + ALREADY DONE | Reply with evidence |
| CORRECT + OUT OF SCOPE | Acknowledge, defer to backlog |
| INCORRECT | Push back with technical reasoning |
| YAGNI | Check if feature is actually used — if unused, propose removal |
YAGNI check:
```bash
# Reviewer says "implement this properly"
# First: is anyone actually using it?
# (braces must be unquoted so the shell expands them into four --include flags;
#  grep's --include globs do not support brace expansion themselves)
grep -r "functionName" --include=*.{ts,tsx,js,jsx} src/
# Zero results? → "This isn't called anywhere. Remove it (YAGNI)?"
```
What to say:
CORRECT: "Fixed. [Brief description]." or "Good catch — [issue]. Fixed in [file]."
PUSHBACK: "[Technical reason]. Current impl handles [X] because [Y]."
UNCLEAR: "Need clarification on [specific aspect]."
What NEVER to say:
BANNED: "You're absolutely right!"
BANNED: "Great point!" / "Great catch!"
BANNED: "Thanks for catching that!"
BANNED: "I agree with your suggestion"
BANNED: "That's a good idea"
BANNED: "I see what you mean"
BANNED: Any sentence that adds no technical information
BANNED: Any performative gratitude — actions speak, not words.
Every response to a review item MUST start with an ACTION VERB:
- "Fixed — [description]"
- "Reverted — [reason]"
- "Deferred — [reason + ticket]"
- "Pushed back — [technical evidence]"
- "Clarifying — [question]"
Responses starting with praise, agreement, or social pleasantries are BLOCKED. This is a professional code review, not a conversation — signal with actions, not words.
When replying to GitHub PR comments, reply in the thread:
```bash
gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies \
  -f body="Fixed — [description]"
```
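To find which thread IDs need replies, the comment list can be pulled once and filtered locally. A sketch, assuming `jq` is installed; `comments.json` here is fabricated placeholder data standing in for the output of `gh api repos/{owner}/{repo}/pulls/{pr}/comments`:

```shell
# Placeholder dump of the PR's review comments (real data comes from gh api).
cat > comments.json <<'EOF'
[
  {"id": 101, "path": "src/auth/session.ts", "body": "off-by-one in expiry check"},
  {"id": 102, "path": "src/api/routes.ts", "body": "nit: rename handler"}
]
EOF

# One line per item to answer: thread id, file, comment text.
jq -r '.[] | "\(.id)\t\(.path)\t\(.body)"' comments.json
```

Each emitted `id` is what goes into the `.../comments/{id}/replies` call above, keeping every response in its original thread.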
For every item with verdict OUT OF SCOPE, write a durable record to .out-of-scope/. Oral-only rejections leave no trace and force re-litigation in future sessions.
Procedure:
1. Derive a slug from the rejected concept (kebab-case, lowercase, max 40 chars, recognizable without opening the file).
2. Glob `.out-of-scope/*.md`, parse each frontmatter's `concept` + `aliases`, and compute overlap with the new slug's tokens. If any existing concept has ≥0.7 overlap → APPEND to that file's `prior_requests` list instead of creating a new one.
3. Write the file per references/out-of-scope-format.md — YAML frontmatter (`concept` / `aliases` / `decision: rejected` / `rejected_at` / `rejected_by: review-intake` / `prior_requests` / `revisit_if`) + Markdown body (concept name, "Why out of scope" reasoning, "What would change our mind" signals).

Only enhancement rejections produce `.out-of-scope/` entries. Bug rejections (won't fix because already fixed / not reproducible / not a bug) get a comment on the issue, not a KB file.
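The ≥0.7 similarity gate can be sketched in shell. This is a minimal token-overlap check under an assumed definition (shared kebab-case tokens divided by the new slug's token count), not the skill's actual implementation:

```shell
#!/usr/bin/env bash
# Fraction of the new slug's tokens also present in an existing concept.
# >= 0.7 -> append to that file's prior_requests; otherwise create a new file.
overlap_verdict() {
  local new_slug="$1" existing="$2"
  local new_toks shared total
  new_toks=$(tr '-' '\n' <<<"$new_slug" | sort -u)
  total=$(wc -l <<<"$new_toks")
  shared=$(comm -12 <(printf '%s\n' "$new_toks") \
                    <(tr '-' '\n' <<<"$existing" | sort -u) | wc -l)
  awk -v s="$shared" -v t="$total" \
    'BEGIN { if (s / t >= 0.7) print "append"; else print "create" }'
}

overlap_verdict "dark-mode-toggle" "dark-mode-theme-toggle"   # 3/3 tokens shared -> append
overlap_verdict "csv-export" "dark-mode-toggle"               # 0/2 tokens shared -> create
```

Tokens are sorted and deduplicated so `comm` can intersect them; the slug names above are hypothetical.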
Execute in priority order: P0 → P1 → P2 → P3 → P4.
For each fix: apply it with `fix`, then re-check with `sentinel` if the reviewer flagged security.

Source Trust Levels:

| Source | Trust | Approach |
|---|---|---|
| Project owner / user | High | Implement after understanding. Still verify scope. |
| Team member | Medium | Verify against codebase. Implement if correct. |
| External reviewer | Low | Skeptical by default. Verify everything. Push back if wrong. |
| AI-generated review | Lowest | Double-check every suggestion. High hallucination risk. |
When external feedback conflicts with owner's prior architectural decisions → STOP. Discuss with owner first.
Push back when:
- Phase 3 evidence contradicts the reviewer's claim (the referenced file, behavior, or test reality differs)
- The suggestion conflicts with architectural decisions already made
- The suggested package or API does not exist (hallucination-guard)
- The item is pure opinion (P4) with no technical justification

How to push back:
- Start with the action verb: "Pushed back — [technical evidence]"
- Cite the concrete evidence: the file, grep result, or passing test that contradicts the claim
- No social softening, no performative agreement — technical reasoning only
## Review Intake Report
### Summary
- **Items received**: [count]
- **Blocking**: [count] | **Bugs**: [count] | **Improvements**: [count] | **Style**: [count]
### Verdicts
| # | Item | Type | Verdict | Action |
|---|------|------|---------|--------|
| 1 | [description] | BUG | CORRECT | Fixed in [file] |
| 2 | [description] | IMPROVEMENT | YAGNI | Proposed removal |
| 3 | [description] | OPINION | PUSHBACK | [reason] |
### Changes Applied
- `path/to/file.ts` — [description]
### Verification
- Tests: PASS ([n] passed)
- Regressions: none
| Gate | Requires | If Missing |
|---|---|---|
| Comprehension | All items understood | Ask clarifying questions, block implementation |
| Verification | Claims checked against codebase | Run scout + grep before implementing |
| Test pass | Each fix passes tests individually | Revert fix, re-diagnose |
| Failure Mode | Severity | Mitigation |
|---|---|---|
| Implementing suggestion that breaks existing feature | CRITICAL | Phase 3 verify: check existing tests before changing |
| Blindly trusting external reviewer | HIGH | Source Trust Levels: external = skeptical by default |
| Implementing 4/6 items, leaving 2 unclear | HIGH | HARD-GATE: all-or-nothing comprehension |
| Performative agreement masking misunderstanding | MEDIUM | Banned phrases list + restate-in-own-words requirement |
| Fixing tests instead of code to make review pass | HIGH | Defer to fix constraints: fix CODE, not TESTS |
| OUT OF SCOPE verdict with no .out-of-scope/ file written | HIGH | Phase 4.5 HARD-GATE — oral-only rejections force re-litigation in future sessions |
| Writing a deferral ("busy this quarter") to .out-of-scope/ | MEDIUM | Deferrals belong in backlog, not the rejection KB. KB entries must cite durable reasons (scope, tech constraint, strategy) |
| Creating duplicate .out-of-scope/ files for the same concept | MEDIUM | Lexical-similarity gate (≥0.7 overlap) — append to existing file's prior_requests instead of duplicating |
| Marking issue ready-for-agent without confirmed repro (bugs) | CRITICAL | Issue Triage Mode Step T4 HARD-GATE: bugs MUST attempt reproduction before ready-for-agent label. Confirmed repro = strong agent-brief; failed repro = needs-info |
| Auto-applying wontfix to an issue without maintainer confirmation | HIGH | Triage recommends, maintainer decides. State changes (label + comment + close) confirmed via Step T3 before Step T6 acts |
| Posting triage comments without the AI-disclaimer line | MEDIUM | Issue Triage Mode requires the "*This was generated by AI during triage.*" disclaimer prefix on every comment — trust degrades when reporter discovers AI authorship after the fact |
| Writing .out-of-scope/ for a wontfix-bug (not enhancement) | MEDIUM | Format spec: only enhancement rejections produce KB files. Bug rejections (already fixed, not reproducible, not a bug) get a comment, not a file |
`.out-of-scope/<slug>.md` file (new or appended)

| Artifact | Format | Location |
|---|---|---|
| Review Intake Report | Markdown table | inline |
| Categorized feedback (P0–P4) | Classified list | inline |
| Verdict per item (CORRECT/PUSHBACK/YAGNI/DEFER) | Table | inline |
| Action plan (changes applied) | File list with descriptions | inline |
~2000-5000 tokens depending on feedback volume. Sonnet for evaluation logic, Haiku for scout/grep verification.
Scope guardrail: review-intake processes the feedback items provided — it does not pull new reviews, open PRs, or change architectural decisions without owner confirmation.