From tradermonty-claude-trading-skills
Reviews Claude Code skills with dual-axis scoring: deterministic checks for structure, scripts, tests, and safety, combined with LLM deep analysis. This yields reproducible scores, merge gating at thresholds such as 90+, and concrete improvement items across projects.
```shell
npx claudepluginhub joshuarweaver/cascade-business-ops --plugin tradermonty-claude-trading-skills
```

This skill uses the workspace's default tool permissions.
Run the dual-axis reviewer script and save reports to `reports/`.
The script supports:
- Reviewing any project whose skills follow the `skills/*/SKILL.md` layout, selected via `--project-root`.
- Running with `uv` (recommended — auto-resolves the pyyaml dependency via inline metadata), or after `uv sync --extra dev` (or equivalent) in the target project.

Determine the correct script path based on your context:
- Reviewing from the same project: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- Reviewing another project (global install): `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`

The examples below use `REVIEWER` as a placeholder. Set it once:
```shell
# If reviewing from the same project:
REVIEWER=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py

# If reviewing another project (global install):
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```
```shell
uv run "$REVIEWER" \
  --project-root . \
  --emit-llm-prompt \
  --output-dir reports/
```
When reviewing a different project, point `--project-root` at it:
```shell
uv run "$REVIEWER" \
  --project-root /path/to/other/project \
  --emit-llm-prompt \
  --output-dir reports/
```
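Prompt files are timestamped, so several can accumulate in `reports/`. A small sketch for picking the newest one to hand to your LLM of choice (the glob is assumed from the reviewer's file-naming scheme):

```shell
# Newest prompt file by modification time (empty if none have been emitted yet).
prompt=$(ls -t reports/skill_review_prompt_*.md 2>/dev/null | head -n1)
if [ -n "$prompt" ]; then
  echo "latest prompt: $prompt"
else
  echo "no prompt files yet"
fi
```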
This writes the LLM prompt to `reports/skill_review_prompt_<skill>_<timestamp>.md`. Once you have the LLM review JSON, merge the two axes:

```shell
uv run "$REVIEWER" \
  --project-root . \
  --skill <skill-name> \
  --llm-review-json <path-to-llm-review.json> \
  --auto-weight 0.5 \
  --llm-weight 0.5 \
  --output-dir reports/
```
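Downstream, the two axes and a 90+ merge gate can be wired into CI. A minimal sketch; the weighted-average formula and the example scores here are assumptions, so check `references/scoring_rubric.md` for the authoritative combination:

```shell
# Hypothetical axis scores (0-100) combined with the 50/50 weights from above.
auto_score=88
llm_score=94
combined=$(awk -v a="$auto_score" -v l="$llm_score" \
  'BEGIN { printf "%.1f", 0.5 * a + 0.5 * l }')
echo "combined: $combined"   # 0.5*88 + 0.5*94 = 91.0

# Fail the pipeline when the combined score is below the gate.
threshold=90
if awk -v s="$combined" -v t="$threshold" 'BEGIN { exit (s >= t) ? 0 : 1 }'; then
  echo "PASS: $combined >= $threshold"
else
  echo "FAIL: $combined < $threshold"
  exit 1
fi
```

Exiting non-zero is what actually blocks the merge in most CI systems, so the gate belongs at the end of the review job.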
Selection and options:
- `--skill <name>` or `--seed <int>`
- `--all`
- `--skip-tests`
- `--output-dir <dir>`

Weighting:
- Raise `--auto-weight` for stricter deterministic gating.
- Raise `--llm-weight` when qualitative/code-review depth is prioritized.

Outputs:
- `reports/skill_review_<skill>_<timestamp>.json`
- `reports/skill_review_<skill>_<timestamp>.md`
- `reports/skill_review_prompt_<skill>_<timestamp>.md` (when `--emit-llm-prompt` is enabled)

To use this skill from any project, symlink it into `~/.claude/skills/`:
```shell
ln -sfn /path/to/claude-trading-skills/skills/dual-axis-skill-reviewer \
  ~/.claude/skills/dual-axis-skill-reviewer
```
After this, Claude Code will discover the skill in all projects, and the script is accessible at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`.
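The `-sfn` flags make the symlink step safe to re-run. A self-contained illustration under `/tmp` (the paths here are throwaway examples, not part of the skill):

```shell
mkdir -p /tmp/skilldemo/v1 /tmp/skilldemo/v2

# -s symbolic, -f replace an existing target, -n treat an existing
# symlink-to-directory as a file so the link is repointed, not nested inside it.
ln -sfn /tmp/skilldemo/v1 /tmp/skilldemo/link
ln -sfn /tmp/skilldemo/v2 /tmp/skilldemo/link   # re-running safely repoints

readlink /tmp/skilldemo/link   # /tmp/skilldemo/v2
```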
The reviewer recognizes `knowledge_only` skills and adjusts script/test expectations to avoid unfair penalties.

Key files:
- `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- `references/llm_review_schema.md`
- `references/scoring_rubric.md`