theclauu
Use for code review operations — uncommitted changes, a specific PR, or multiple PRs in parallel.
npx claudepluginhub artemis-xyz/theclauu --plugin theclauu

This skill uses the workspace's default tool permissions.
Three modes. Pick based on scope:
========================================
Do NOT approve or express satisfaction with changes until verification commands have been run and their output confirms the claim.

Violating the letter of verification is violating the spirit. Partial checks prove nothing.
Run these before forming any opinion:

- git status to see what's changed
- git diff to see the actual changes

Use git diff to identify what changed, then verify that relevant tests exist and were actually executed. If you haven't seen test output, you don't know.

If you catch yourself thinking any of these during review, STOP — you are about to skip verification:

| Excuse | Reality |
|---|---|
| "The code looks correct" | Visual inspection catches formatting, not logic. Trace the actual execution path. |
| "Tests are probably passing" | "Probably" = "I didn't check." Run them or confirm they were run. |
| "Small change, quick review" | Small change, full review — just faster. Not no review. |
| "I already reviewed this mentally" | Mental review has no evidence trail. Write down what you found. |
| "The diff is clean" | Clean != correct. Clean diffs hide logic bugs. |
| "It's just a refactor" | Refactors break behavior. Verify the old tests still pass. |
| "No security concerns here" | Did you actually check for injection, auth, and data exposure? Saying "no concerns" requires evidence. |
| "Error handling is adequate" | Adequate by what standard? Check every error path. What happens on null? On timeout? On malformed input? |
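The evidence trail the table demands can be sketched in shell. The throwaway repo and the settings.conf file below are stand-ins so the sketch is self-contained; in real use you run the same commands in the repo under review:

```shell
# Build a throwaway repo with one committed file, then make an
# uncommitted change — the situation the review rules target.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "timeout = 30" > settings.conf
git add settings.conf
git -c user.name=reviewer -c user.email=r@example.com commit -qm "add settings"
echo "timeout = 60" > settings.conf   # the uncommitted change under review

# The evidence: what changed, and the actual hunks — not a mental summary.
status_out=$(git status --short)      # ' M settings.conf'
diff_out=$(git diff)                  # shows -timeout = 30 / +timeout = 60
echo "$status_out"
echo "$diff_out"
```

Only after reading the real diff (and seeing real test output) is "looks correct" a claim backed by evidence.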
========================================
Structured code review on a pull request. Produce clear, actionable comments -- then post them with user approval.
Violating the letter of verification is violating the spirit. If every checklist item isn't explicitly verified with evidence, the review is incomplete.
Ground every comment in first principles, simplicity, modularity, separation of concerns, and clean implementation. Be helpful, not pedantic -- flag what matters, skip linter-level style nitpicks.
Follow these steps exactly in order.
Accept PR number, URL, or no argument (run gh pr list and ask user to pick). Fetch with gh pr view <number> --json title,body,author,baseRefName,headRefName,files,additions,deletions,reviews,comments. Present overview (author, branch, files, status, existing reviews).
Fetch with gh pr diff <number>. Read thoroughly -- for each changed file, read enough surrounding context to understand the change in situ. For large PRs (>500 lines), organize by file or logical concern and use Explore subagents to parallelize.
Before critiquing, understand what the PR is trying to do. Read the PR body, referenced plan docs, linked issues, and commit messages. Summarize: "This PR does X because Y." Ask user to confirm if unsure.
Evaluate the PR across the dimensions defined in review-dimensions.md. Categorize each finding by severity using the system in severity-categories.md.
Present findings organized by severity (Blockers, Suggestions, Nits, Questions) with file:line references and concrete fixes. Include a one-sentence assessment and verdict (Approve / Request Changes / Comment Only). Follow the rules in severity-categories.md.
Ask user: "Want me to post this review to the PR?" Options: post as-is, edit first, or keep local. Use gh pr review <number> with --approve, --request-changes, or --comment. Post file-level comments via gh api. Confirm when posted.
Before finalizing any review, consult red-flags-and-rationalizations.md and verify you haven't fallen into any of those traps.
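The gh calls used across the steps above can be collected into one dry-run sketch. PR 123, the file path, and the comment bodies are placeholders; run() prints each command instead of executing it, since posting requires user approval — swap it for one that runs "$@" when ready:

```shell
# Dry-run wrapper: prints each gh command instead of executing it.
run() { printf '+ %s\n' "$*"; }

pr=123  # placeholder PR number

# Fetch metadata and the full diff.
run gh pr view "$pr" --json title,body,author,baseRefName,headRefName,files,additions,deletions,reviews,comments
run gh pr diff "$pr"

# Post only after the user approves; pick exactly one verdict flag
# (--approve, --request-changes, or --comment).
run gh pr review "$pr" --comment --body "Findings attached"

# File-level comments go through the REST API; the fields shown are the
# ones the "create a review comment" endpoint expects.
run gh api "repos/OWNER/REPO/pulls/$pr/comments" \
  -f body="Consider a guard here" -f path="src/app.js" -F line=42 -f commit_id="HEAD_SHA"
```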
========================================
A single reviewer has blind spots. This skill spawns four review subagents in parallel — each invokes a different review skill, so the four perspectives aren't just different prompts, they're different disciplines applied to every PR — then synthesizes the results.
Without it, agents default to: one merged review per PR, serial work, no claim verification, judgments instead of failure modes.
Triggers: "thoroughly review," "defense in depth," "before merging," PRs touching production-critical paths (pipelines, deploy, auth, data DML), or multiple open PRs needing coordinated review.
Skip: trivial PRs (<20 lines), docs-only, or when you already have strong diff context.
REQUIRED SUB-SKILL: superpowers:dispatching-parallel-agents.
Each subagent invokes its assigned skill FIRST (via the Skill tool), then applies that skill's discipline to every PR in the batch.
| Name | Lens | Invokes skill | Hunts for |
|---|---|---|---|
| reviewer-general | generalist | /review-pr | Correctness, design, tests, edge cases, readability |
| reviewer-structural | risk-class | /review (gstack) | SQL injection, LLM trust boundaries, conditional side effects, data loss, error swallowing, atomicity |
| reviewer-adversarial | break-it | /codex (challenge mode) | Concrete failure modes with input+severity; "200 IQ" adversarial second opinion |
| reviewer-spec | claim-vs-impl | superpowers:requesting-code-review | Extracts PR body claims, traces each to diff, flags mismatches, verifies work-meets-requirements |
Each agent gets the full PR list — lens separation is what catches what one perspective misses.
Why this works: each review skill is a crystallized discipline. Invoking the skill means the subagent inherits that skill's checklist, ethos, and rigor — not just a lens description in a prompt. Four skills applied in parallel = four crystallized disciplines, not four paraphrases of "be thorough."
Gather PRs. Accept URL, owner/repo#N, or #N. Verify each with gh pr view.
Compose four self-contained prompts. Each must include:
- An instruction to FIRST invoke its assigned skill via the Skill tool and apply its discipline as the primary reviewing lens: "Do not proceed to the PRs until the skill is loaded."
- Every PR, as owner/repo/number
- PR metadata (gh pr view --json body)
- The exact gh pr diff commands to run

Dispatch in ONE message: all four Agent calls together, subagent_type: general-purpose (required — the default code-reviewer lacks Skill tool access), run_in_background: true, names from the table above. Sequential dispatch loses all parallelism.
Wait for all four. Give one-line completion announcements ("2 of 4 done"). Do NOT synthesize partial results — cross-reviewer consensus is the signal.
Synthesize in this order:
1. Consolidated findings table: | # | Finding | Reviewers | Severity |. Severity floor = the loudest reviewer.
2. Verdict matrix: | Lens | PR #A | PR #B |, with each reviewer's explicit verdict.
3. Offer to act: fix, post as a PR comment, or leave it. Post via gh pr comment <N> --repo <owner>/<repo> --body.
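Normalizing the three accepted PR reference forms (full URL, owner/repo#N, bare #N) before the gh pr view verification pass might look like this sketch. DEFAULT_REPO (acme/widgets) is a hypothetical fallback for bare references, not part of the skill:

```shell
# Hypothetical fallback repo for bare "#N" references.
DEFAULT_REPO="acme/widgets"

# Normalize a PR reference into "owner/repo number".
normalize_pr() {
  case "$1" in
    https://github.com/*/pull/*)
      rest=${1#https://github.com/}          # owner/repo/pull/N
      repo=${rest%/pull/*}
      num=${rest##*/pull/}
      ;;
    *"#"*)                                   # owner/repo#N or bare #N
      repo=${1%#*}
      [ -n "$repo" ] || repo=$DEFAULT_REPO
      num=${1#*#}
      ;;
    *)                                       # bare number
      repo=$DEFAULT_REPO
      num=$1
      ;;
  esac
  printf '%s %s\n' "$repo" "$num"
}
```

Each normalized pair then gets verified with gh pr view before any subagent is dispatched.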
| Mistake | Fix |
|---|---|
| Dispatching sequentially | Single message, multiple tool calls |
| Forgetting to instruct skill invocation | Each prompt must include "FIRST: invoke <skill>" — otherwise the subagent improvises a review without the discipline |
| Using code-reviewer subagent type | It lacks Skill tool access; use general-purpose |
| Synthesizing before all four return | Wait — consensus IS the signal |
| One PR per agent | Each agent reviews EVERY PR |
| Softening adversarial low-severity items | Preserve severity; user decides |
| Description summarizing workflow | Description = triggering conditions only |
| Assigning multiple skills per agent | One skill per agent — discipline concentration is the point |
========================================
Consolidated from legacy skills. Pick the mode/lens based on intent.