From team-skills-platform
Use when auditing Claude skills and commands for quality. Supports Quick Scan (changed skills only) and Full Stocktake modes with sequential subagent batch evaluation.
Install:

```
npx claudepluginhub colin4k1024/tsp
```

This skill uses the workspace's default tool permissions.
Slash command (`/skill-stocktake`) that audits all Claude skills and commands using a quality checklist + AI holistic judgment. Supports two modes: Quick Scan for recently changed skills, and Full Stocktake for a complete review.
The command targets the following paths relative to the directory where it is invoked:
| Path | Description |
|---|---|
| `~/.claude/skills/` | Global skills (all projects) |
| `{cwd}/.claude/skills/` | Project-level skills (if the directory exists) |
At the start of Phase 1, the command explicitly lists which paths were found and scanned.
To include project-level skills, run from that project's root directory:
```
cd ~/path/to/my-project
/skill-stocktake
```
If the project has no .claude/skills/ directory, only global skills and commands are evaluated.
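The directory resolution described above can be sketched in shell (an illustration of the documented behavior, not the command's actual implementation):

```shell
# Sketch: which skill directories get scanned (illustrative only).
GLOBAL_SKILLS="$HOME/.claude/skills"
PROJECT_SKILLS="$PWD/.claude/skills"

SCAN_PATHS=("$GLOBAL_SKILLS")            # global skills are always scanned
if [ -d "$PROJECT_SKILLS" ]; then
  SCAN_PATHS+=("$PROJECT_SKILLS")        # project skills only if the dir exists
fi

printf 'Scanning: %s\n' "${SCAN_PATHS[@]}"
```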
| Mode | Trigger | Duration |
|---|---|---|
| Quick Scan | results.json exists (default) | 5–10 min |
| Full Stocktake | results.json absent, or /skill-stocktake full | 20–30 min |
Results cache: ~/.claude/skills/skill-stocktake/results.json
Re-evaluate only skills that have changed since the last run (5–10 min).
Quick Scan steps:

1. Run the diff script against the cache:

   ```
   bash ~/.claude/skills/skill-stocktake/scripts/quick-diff.sh \
     ~/.claude/skills/skill-stocktake/results.json
   ```

   (Project dir is auto-detected from `$PWD/.claude/skills`; pass it explicitly only if needed.)

2. If the diff is empty (`[]`): report "No changes since last run." and stop.

3. After evaluating the changed skills, save them back to the cache:

   ```
   bash ~/.claude/skills/skill-stocktake/scripts/save-results.sh \
     ~/.claude/skills/skill-stocktake/results.json <<< "$EVAL_RESULTS"
   ```

For a Full Stocktake, run:

```
bash ~/.claude/skills/skill-stocktake/scripts/scan.sh
```
The script enumerates skill files, extracts frontmatter, and collects UTC mtimes.
Project dir is auto-detected from $PWD/.claude/skills; pass it explicitly only if needed.
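A rough sketch of what such a scan gathers per skill, assuming GNU coreutils (`date -r`) and standard `---`-delimited YAML frontmatter; the real scan.sh output format may differ:

```shell
# Sketch of per-skill data collection (illustrative; not scan.sh itself).
scan_skills() {
  local dir="$1"
  find "$dir" -name 'SKILL.md' 2>/dev/null | while read -r f; do
    # First "description:" line inside the frontmatter block, if any
    desc=$(sed -n '/^---$/,/^---$/p' "$f" | grep -m1 '^description:' | cut -d' ' -f2-)
    # File mtime in UTC, ISO 8601 (GNU date)
    mtime=$(date -u -r "$f" +%Y-%m-%dT%H:%M:%SZ)
    printf '%s\t%s\t%s\n' "$f" "$mtime" "$desc"
  done
}
```

`scan_skills ~/.claude/skills` prints one tab-separated line per skill file.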
Present the scan summary and inventory table from the script output:
```
Scanning:
✓ ~/.claude/skills/ (17 files)
✗ {cwd}/.claude/skills/ (not found — global skills only)
```
| Skill | 7d use | 30d use | Description |
|---|---|---|---|
Launch an Agent tool subagent (general-purpose agent) with the full inventory and checklist:
```
Agent(
  subagent_type="general-purpose",
  prompt="
    Evaluate the following skill inventory against the checklist.
    [INVENTORY]
    [CHECKLIST]
    Return JSON for each skill:
    { \"verdict\": \"Keep\"|\"Improve\"|\"Update\"|\"Retire\"|\"Merge into [X]\", \"reason\": \"...\" }
  "
)
```
The subagent reads each skill, applies the checklist, and returns per-skill JSON:
```
{ "verdict": "Keep"|"Improve"|"Update"|"Retire"|"Merge into [X]", "reason": "..." }
```
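Since the verdict vocabulary is fixed, the caller can sanity-check each returned verdict before saving it. A minimal sketch (this validator is hypothetical, not part of the command):

```shell
# Hypothetical helper: accept only the five documented verdict forms.
validate_verdict() {
  case "$1" in
    Keep|Improve|Update|Retire|"Merge into "?*) return 0 ;;
    *) return 1 ;;
  esac
}
```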
Chunk guidance: process ~20 skills per subagent invocation to keep context manageable. Save intermediate results to `results.json` (`status: "in_progress"`) after each chunk.
After all skills are evaluated: set `status: "completed"` and proceed to Phase 3.
Resume detection: if `status: "in_progress"` is found on startup, resume from the first unevaluated skill.
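Resume detection can be sketched with `jq`, assuming the results.json schema documented later in this file (entries evaluated so far carry a `verdict`); the helper name is illustrative:

```shell
# Hypothetical sketch: list skills still awaiting evaluation (requires jq).
pending_skills() {
  local results="$1"
  # In an in_progress run, any skill entry without a verdict is unevaluated
  if [ "$(jq -r '.batch_progress.status' "$results")" = "in_progress" ]; then
    jq -r '.skills | to_entries[] | select(.value.verdict == null) | .key' "$results"
  fi
}
```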
Each skill is evaluated against this checklist:
- [ ] Content overlap with other skills checked
- [ ] Overlap with MEMORY.md / CLAUDE.md checked
- [ ] Freshness of technical references verified (use WebSearch if tool names / CLI flags / APIs are present)
- [ ] Usage frequency considered
Verdict criteria:
| Verdict | Meaning |
|---|---|
| Keep | Useful and current |
| Improve | Worth keeping, but specific improvements needed |
| Update | Referenced technology is outdated (verify with WebSearch) |
| Retire | Low quality, stale, or cost-asymmetric |
| Merge into [X] | Substantial overlap with another skill; name the merge target |
Evaluation is holistic AI judgment, not a numeric rubric. Guiding dimensions:
Reason quality requirements: the `reason` field must be self-contained and decision-enabling. Vague labels vs. acceptable reasons:

| Vague label | Self-contained reason |
|---|---|
| "Superseded" | "disable-model-invocation: true already set; superseded by continuous-learning-v2 which covers all the same patterns plus confidence scoring. No unique content remains." |
| "Overlaps with X" | "42-line thin content; Step 4 of chatlog-to-article already covers the same workflow. Integrate the 'article angle' tip as a note in that skill." |
| "Too long" | "276 lines; Section 'Framework Comparison' (L80–140) duplicates ai-era-architecture-principles; delete it to reach ~150 lines." |
| "Unchanged" | "mtime updated but content unchanged. Unique Python reference explicitly imported by rules/python/; no overlap found." |

Phase 3 report format:

| Skill | 7d use | Verdict | Reason |
|---|---|---|---|
Results are saved to `~/.claude/skills/skill-stocktake/results.json`:

`evaluated_at` must be set to the actual UTC time of evaluation completion. Obtain it via Bash: `date -u +%Y-%m-%dT%H:%M:%SZ`. Never use a date-only approximation like `T00:00:00Z`.
```json
{
  "evaluated_at": "2026-02-21T10:00:00Z",
  "mode": "full",
  "batch_progress": {
    "total": 80,
    "evaluated": 80,
    "status": "completed"
  },
  "skills": {
    "skill-name": {
      "path": "~/.claude/skills/skill-name/SKILL.md",
      "verdict": "Keep",
      "reason": "Concrete, actionable, unique value for X workflow",
      "mtime": "2026-01-15T08:30:00Z"
    }
  }
}
```
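Stamping the completion fields could look like this (a sketch assuming `jq` is available; the helper name is illustrative):

```shell
# Hypothetical sketch: write the real UTC completion time and mark done.
finalize_results() {
  local f="$1" now tmp
  now=$(date -u +%Y-%m-%dT%H:%M:%SZ)     # actual time, never T00:00:00Z
  tmp=$(mktemp)
  jq --arg t "$now" \
     '.evaluated_at = $t | .batch_progress.status = "completed"' \
     "$f" > "$tmp" && mv "$tmp" "$f"
}
```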