Evaluate each validation dimension systematically. Question every assumption. Cross-reference against actual codebase artifacts.
$ARGUMENTS
If no focus specified, validate the entire preceding context (plan, code changes, discussion, or proposal).
<deliverable_check> Based on the focus area, identify:
If the requested deliverable does not exist, this is automatically a critical finding. Do not proceed to validate planning artifacts as a substitute. Status is NEEDS_ACTION until the deliverable exists. </deliverable_check>
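The gate above can be sketched in a few lines; the path and return shape here are hypothetical, standing in for however your tooling represents findings:

```python
from pathlib import Path

def deliverable_gate(deliverable_path: str) -> dict:
    """Sketch of the deliverable check: a missing deliverable is an
    automatic critical finding, and status stays NEEDS_ACTION until
    the artifact actually exists on disk."""
    if not Path(deliverable_path).exists():
        return {
            "status": "NEEDS_ACTION",
            "critical": [f"deliverable missing: {deliverable_path}"],
        }
    # The deliverable exists; the full review still has to run.
    return {"status": "pending_full_review", "critical": []}
```

Note the asymmetry: existence of the file never produces a PASS on its own, it only clears the way for the rest of the validation.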
<context_detection> Identify what you're validating:
Adapt your validation approach accordingly. </context_detection>
<source_reading> Before evaluating, re-read every primary source file in scope from disk — even files that appear in the conversation context. Files may have been edited since they were last read, and stale context produces false findings. Trust what the file contains now, not what an earlier Read result showed.
When the focus area references a command template that calls a script, that script is a primary source — read it, because the template's description of what the script does may be incomplete or outdated. The same applies to configs, schemas, and any file referenced by another file you've read.
Follow reference chains: a script that calls another script makes both relevant to your findings, so if file A calls file B and B reads file C, all three are in scope.
Build a file inventory as you go. Any finding that references a file you haven't read belongs in NEEDS_VALIDATION, not ERRORS — because you're reasoning from description rather than source. </source_reading>
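The reference-chain rule amounts to a graph traversal. A minimal sketch, where `references_of` is a hypothetical helper standing in for however you extract calls, imports, and config references from a file:

```python
from collections import deque

def files_in_scope(roots, references_of):
    """Walk the reference chain breadth-first: every file reachable
    from a primary source (A calls B, B reads C) is in scope."""
    seen = set()
    queue = deque(roots)
    while queue:
        path = queue.popleft()
        if path in seen:
            continue  # already inventoried; avoids cycles
        seen.add(path)
        queue.extend(references_of(path))
    return seen
```

The returned set doubles as the file inventory: any finding about a path outside it belongs in NEEDS_VALIDATION.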
List every assumption in the preceding context. For each:
Examine for issues appropriate to the context type:
For Code:
For Plans/Proposals:
For Configuration:
Identify what's missing:
Testing: Only flag missing e2e tests that run the real system with real data. Never flag absent unit tests, mocks, fakes, or synthetic-data tests.
Compare against existing patterns:
CRITICAL (Resolve before proceeding)
If your evidence is an inference about behavior in a file you haven't read, this belongs in NEEDS_VALIDATION until you read that file.
ERRORS FOUND (Severity: HIGH/MEDIUM/LOW)
If your evidence is an inference about behavior in a file you haven't read, this belongs in NEEDS_VALIDATION until you read that file.
ALIGNMENT ISSUES (Conflicts with codebase or conventions)
MISSING (Gaps needing attention)
IMPROVEMENTS (Better alternatives with expected benefit)
VALIDATED (Confirmed with citations)
NEEDS VALIDATION (Default category for unverified concerns)
Use this for any concern where:
Promote to ERRORS only after reading the relevant source and confirming the problem exists.
List every file you read during this review, so the reader can verify your coverage and spot files you may have missed.
path/to/file.py — relevant to: [what aspect of the review it informed]

After all findings sections, output this human-readable scorecard table:
## Scorecard
| Category | Count | Action needed? |
|--------------------|-------|----------------|
| Critical | X | YES |
| Errors | X | YES |
| Alignment issues | X | YES |
| Missing | X | YES |
| Needs validation | X | YES |
| Improvements | X | no |
| Validated | X | no |
| **Status** | | **PASS / NEEDS_ACTION** |
Rules for the "Action needed?" column: Critical, Errors, Alignment issues, Missing, and Needs validation are YES when their count > 0 (otherwise "no"). Improvements and Validated are always "no".
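The column rule above reduces to one branch. A sketch, assuming the category names exactly as they appear in the table:

```python
# Categories that never require action, regardless of count.
ALWAYS_NO = {"Improvements", "Validated"}

def action_needed(category: str, count: int) -> str:
    """Return the 'Action needed?' cell for one scorecard row."""
    if category in ALWAYS_NO:
        return "no"
    return "YES" if count > 0 else "no"
```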
Immediately after the scorecard, output this exact summary block (parsed by automation hooks):
<ultrareview_summary>
status: [PASS|NEEDS_ACTION]
critical: [count]
errors: [count]
alignment: [count]
missing: [count]
improvements: [count]
needs_validation: [count]
validated: [count]
</ultrareview_summary>
Rules:
status: PASS only if critical=0 AND errors=0 AND alignment=0 AND missing=0 AND needs_validation=0
status: NEEDS_ACTION if any actionable findings exist
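A hook consuming this block might parse it as follows. This is a sketch assuming the exact tag and key names shown above; real automation may read the block differently:

```python
import re

def parse_summary(text: str) -> dict:
    """Extract the <ultrareview_summary> block into a dict of the
    status string plus integer counts per category."""
    block = re.search(
        r"<ultrareview_summary>(.*?)</ultrareview_summary>", text, re.S
    )
    if not block:
        raise ValueError("no summary block found")
    out = {}
    for line in block.group(1).strip().splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        # 'status' stays a string; every other field is a count.
        out[key] = value if key == "status" else int(value)
    return out
```

Keeping the block machine-parseable is the reason its format is fixed: free-form prose between the tags would break a parser like this.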