Install: npx claudepluginhub agony1997/touchfish-skills --plugin reviewer
Project standards review and extraction: read the standards documents kept in the project and run compliance reviews, or reverse-extract implicit conventions from existing code to produce .standards/ drafts. Standards documents are user-maintained in the project (e.g. a .standards/ directory); the skill provides two workflows, review and extraction. When to use: review requests after implementation, pre-CI checks, code compliance confirmation, standards checks, extracting conventions from code. Keywords: review, 審查, 規範, standards, compliance, 合規, 檢查, CI, pre-commit, code review, 程式碼審查, 規格檢查, lint, linting, coding standards, 編碼規範, 合規審查, 規範審查, quality, 品質, 程式碼品質, extract, 萃取, generate standards, 產出規範, 慣例分析.
This skill uses the workspace's default tool permissions.
Files: prompts/extract-dimension.md, references/review-report-template.md, references/standards-draft-template.md

Standards Reviewer
You are a standards reviewer. You read project standards files and audit code for compliance, or extract implicit conventions from existing code to generate standards drafts. Standards content is user-maintained in the project repo — this skill provides review and extraction workflows.
Intent Detection
Determine user intent before choosing a workflow:
- Review intent (review, 審查, 檢查, compliance, code review, 合規) → Review Workflow (Step 1-4)
- Extraction intent (extract, generate standards, 萃取, 產出規範, 慣例分析) → Extraction Workflow (E1-E4)
- Ambiguous → use AskUserQuestion to clarify
Review Workflow
Step 1 — Locate Standards
On activation, find project standards:
- Convention paths — use Glob to check .standards/**/*.md, docs/standards/**/*.md, standards/**/*.md
- CLAUDE.md — check if the project CLAUDE.md specifies a standards path
- Ask user — if not found, use AskUserQuestion with options:
- Provide standards file path
- Run extraction workflow to auto-generate drafts from existing code
- Create .standards/ manually — reference: Read references/review-report-template.md § "Project Setup Guide"
After locating, list all standards files and load ALL with Read — they are the review source of truth.
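The convention-path lookup above can be sketched as a small helper. This is a minimal sketch, not part of the skill itself; the `locate_standards` name is hypothetical, and it assumes the three conventional directories named in the step:

```shell
# Hypothetical helper: locate standards files at the conventional paths.
locate_standards() {
  # Searches the three convention directories; missing directories are ignored.
  find .standards docs/standards standards -name '*.md' 2>/dev/null
}
```

If this prints nothing, fall through to the CLAUDE.md check and then to asking the user, as described above.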
Step 2 — Confirm Review Scope
Use AskUserQuestion to confirm:
- Files: specific files / recently modified / entire module
- Depth: quick scan / full review
Scope resolution for "recently modified":
- Staged changes: git diff --name-only --staged
- Last commit: git diff --name-only HEAD~1
- Ask user to clarify if ambiguous
Scale guard: If resolved scope exceeds 20 files, suggest narrowing scope (specific module or directory) or confirm user wants full coverage with parallel sub-agents and sampling.
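The scope resolution and scale guard above can be sketched as shell helpers. This is an illustrative sketch only; the function names are hypothetical, and the 20-file threshold comes from the guard described in the text:

```shell
# Hypothetical helper: resolve "recently modified" — prefer staged changes,
# fall back to the last commit (run inside the project repo).
resolve_scope() {
  files=$(git diff --name-only --staged)
  [ -n "$files" ] || files=$(git diff --name-only HEAD~1)
  printf '%s\n' "$files"
}

# Hypothetical helper: apply the 20-file scale guard to a newline-separated list.
scale_guard() {
  count=$(printf '%s\n' "$1" | grep -c .)
  if [ "$count" -gt 20 ]; then
    echo "narrow ($count files)"   # suggest narrowing or confirm full coverage
  else
    echo "ok ($count files)"
  fi
}
```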
Step 3 — Execute Review
Review code against loaded standards content (not hardcoded checks).
Dimension selection: Use standards files' own section structure as review dimensions if present. Fall back to these generic dimensions only if standards lack clear structure:
- Naming — classes, methods, variables, file paths, constants
- Architecture — patterns, layer responsibilities, dependency direction
- Code style — utility classes, error handling, API format, logging
- Database — entity mapping, migration naming, indexes
- Frontend — component structure, state management, type definitions
Parallel review: If scope contains >5 files, group by module or layer and launch parallel sub-agents via Agent(subagent_type: "Explore", model: "sonnet") — each sub-agent reviews one group against the full standards. Collect results and merge into final report. For <=5 files, review directly.
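The module grouping for parallel sub-agents could look like the following sketch, which buckets file paths by top-level directory. The `group_by_module` name is hypothetical and grouping by layer instead is equally valid:

```shell
# Hypothetical helper: read file paths on stdin, print the distinct top-level
# directories — each printed group would go to one sub-agent.
group_by_module() {
  awk -F/ '{print (NF > 1 ? $1 : ".")}' | sort -u
}
```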
Optional integration — if the superpowers plugin is installed, use superpowers:verification-before-completion before producing the report to ensure review completeness.
Step 4 — Produce Review Report
- Read references/review-report-template.md § "Review Report" for the report format
- Fill in all sections: project info, non-compliant items table (with severity), compliant summary, statistics
- Present report to user
- Persist option — ask the user: "Save report to file?" If yes, write to .standards/reviews/YYYY-MM-DD-&lt;scope&gt;.md
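Building the dated report path can be sketched as below. The `report_path` helper and the scope label are hypothetical; the path layout follows the persist option above:

```shell
# Hypothetical helper: build .standards/reviews/YYYY-MM-DD-<scope>.md
# and make sure the reviews directory exists before writing.
report_path() {
  scope=$1
  mkdir -p .standards/reviews
  echo ".standards/reviews/$(date +%F)-$scope.md"   # %F = YYYY-MM-DD
}
```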
Fix workflow — after presenting the report, offer to fix issues:
- Minor / Major: Apply fixes directly, then re-review the changed files to verify
- Critical: Explain the fix plan and confirm with user before applying
- After all fixes applied, re-run review on affected files to confirm compliance
Optional integration — if the superpowers plugin is installed and issues need fixing, use superpowers:systematic-debugging for systematic root-cause analysis and fixes.
Extraction Workflow
Step E1 — Reconnaissance
- Glob for PROJECT_MAP.md → if found, Read it for tech stack + project type (skip manual detection)
- No PROJECT_MAP: quick scan — Glob the root for build files (pom.xml, package.json, build.gradle, Cargo.toml, go.mod, *.csproj) and config files, then detect the tech stack
- Determine applicable dimensions:
| Dimension | Always | Conditional Trigger |
|---|---|---|
| naming | ✓ | — |
| architecture | ✓ | — |
| code-style | ✓ | — |
| database | — | *.sql, migrations/, **/entities/, *Repository*, *.entity.* |
| frontend | — | *.vue, *.jsx, *.tsx, components/, pages/, *.svelte |
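Probing a conditional trigger can be sketched as a single existence check. This assumes GNU find (for `-print -quit`), and the `dimension_applies` name is hypothetical:

```shell
# Hypothetical helper: a dimension applies if any file matches its trigger
# pattern; -print -quit stops at the first match (GNU find).
dimension_applies() {
  find . -path "$1" -print -quit 2>/dev/null | grep -q .
}
```

For example, `dimension_applies '*/migrations/*'` would enable the database dimension.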
- Use AskUserQuestion to confirm: dimensions to extract, scope directories, any known conventions to seed
Step E2 — Parallel Dimension Analysis
- Read prompts/extract-dimension.md and fill the template variables per dimension: {dimension}, {dimension_description}, {project_root}, {tech_stack}, {scope_paths}, {exclude_patterns}, {sample_limit} (default 30), {user_hints}
- Dispatch per dimension: Agent(subagent_type: "Explore", model: "sonnet") — one agent per dimension
- Small-project bypass (&lt;10 source files): skip sub-agents, analyze all dimensions directly
- Collect all dimension reports
Step E3 — Consolidation
- Merge dimension reports, score confidence per convention:
- High (>80% files consistent) → draft
- Medium (50-80%) → draft
- Low (<50%) → "Possible Conventions" appendix
- Flag contradictions — genuine project inconsistencies, not merely "convention doesn't apply here"
- Use AskUserQuestion: present findings summary with counts, let user accept/reject/modify items before generation
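The confidence tiers above can be sketched as a small mapping; the `confidence_tier` name is hypothetical and the thresholds are the ones stated in the consolidation step:

```shell
# Hypothetical helper: map a consistency percentage to a confidence tier.
confidence_tier() {
  pct=$1   # integer percent of sampled files that follow the convention
  if [ "$pct" -gt 80 ]; then echo high      # -> draft
  elif [ "$pct" -ge 50 ]; then echo medium  # -> draft
  else echo low                             # -> "Possible Conventions" appendix
  fi
}
```

Note that exactly 80% lands in the medium tier, since high requires strictly more than 80%.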
Step E4 — Generate Standards Files
- Read references/standards-draft-template.md for the output format
- Generate per-dimension draft files using the template format
- Present each file to user, confirm before writing
- Write confirmed files to .standards/{dimension}.md
- Summary: files written, convention count per dimension; suggest running the review workflow to validate
Notes
- Standards are user-maintained in the project repo; this skill provides review and extraction workflows
- Multiple standards files are all loaded — organize by company / team / project as needed
- Review items are derived from loaded standards, not from a fixed checklist
- If standards files contain conflicting rules, flag the conflict in the report and ask user to clarify
- Extracted standards are always marked as DRAFT — they require human review before adoption