From domain-chassis
This skill should be used when the user asks to "review a spec", "validate a spec", "is this spec implementable", "critique this spec", "find problems in my spec", "check this design doc", "is this plan sound", "review before implementation", "review this document", "document review", or mentions reviewing specs or structured documents for correctness, implementability, cross-reference consistency, or control-flow soundness before execution begins. Provides a structured multi-round review methodology using targeted subagent prompts that trace state through control flow branches and cross-reference interfaces.
npx claudepluginhub basher83/domain-chassis --plugin domain-chassis

This skill uses the workspace's default tool permissions.
The reviewer's director holds the context, writes the subagent prompt, and triages the results.
When constructing the subagent prompt, include:
Round 1 is a broad sweep. Expect 5-15 findings across severity levels.
Subsequent rounds are targeted. Exclude prior findings and focus on areas not yet covered.
Two to three rounds are typically sufficient. After each round, report the severity levels of the findings and whether another round is likely to surface actionable issues.
Stop after findings drop to minor severity unless the user explicitly requests another round.
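The round loop above can be sketched in code. This is an illustrative Python sketch, not part of the skill itself: the `Finding` structure and the `run_round` callable (which stands in for dispatching a targeted subagent prompt) are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    severity: str   # assumed levels: "critical", "major", or "minor"
    summary: str

# run_round stands in for prompting a review subagent; it receives the
# spec, all prior findings (to exclude from the next round), and the
# round number (round 1 is the broad sweep).
RunRound = Callable[[str, List[Finding], int], List[Finding]]

def review(spec: str, run_round: RunRound, max_rounds: int = 3) -> List[Finding]:
    """Run up to max_rounds review rounds, stopping early once a
    round surfaces only minor-severity findings."""
    all_findings: List[Finding] = []
    for round_num in range(1, max_rounds + 1):
        findings = run_round(spec, all_findings, round_num)
        all_findings.extend(findings)
        # Stop when a round yields nothing above minor severity.
        if all(f.severity == "minor" for f in findings):
            break
    return all_findings
```

In practice the director plays the role of `run_round`: it writes each round's prompt by hand, triages the results, and decides whether the stop condition has been met.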