EXPERIMENTAL — only use when explicitly requested. Generate Claude Code skills from docs sites, GitHub repos, or local codebases using Skill Seekers.
```bash
npx claudepluginhub fairchild/dotclaude --plugin skill-creator
```

This skill uses the workspace's default tool permissions.
Generate, review, and install Claude Code skills from any documentation source.
Pipeline: Source → Generate → Review → Summarize → Install
Gather inputs from the user:
- Source: a docs site URL, GitHub owner/repo, local directory path, or PDF file
- Skill name: infer from the source (e.g., hono from https://hono.dev) or ask
- Preset:
  - quick — Fast, essential docs only (1-2 min)
  - standard — Balanced coverage (5-10 min)
  - comprehensive — Deep analysis, all features (20-60 min)

If the user provides a source inline (e.g., "create a skill for Hono"), infer the source URL/repo and default to the standard preset. Confirm before proceeding.
Run the create script. Use a timeout appropriate for the preset — scraping takes real time:
- quick: 2 minutes
- standard: 10 minutes
- comprehensive: 60 minutes

```bash
uv run ~/.claude/skills/skill-seeker/scripts/create.py \
  --source "<source>" \
  --name "<skill-name>" \
  --preset "<preset>" \
  --output-dir "/tmp/skill-seeker/<skill-name>"
```
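For example, a standard-preset run against the Hono docs might look like this; the coreutils timeout wrapper shown here is just one way to enforce the preset's time budget:

```bash
# standard preset: allow up to 10 minutes (600 s) before giving up
timeout 600 uv run ~/.claude/skills/skill-seeker/scripts/create.py \
  --source "https://hono.dev" \
  --name "hono" \
  --preset "standard" \
  --output-dir "/tmp/skill-seeker/hono"
```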
The script runs skill-seekers create with --enhance-level 0 (Claude handles quality review instead of the keyword-based enhancer). Note the actual output path printed by the script — skill-seekers may nest output in a subdirectory.
On success, read the generated SKILL.md from the path printed by the script. On failure, report the error and suggest trying the quick preset or a different source.
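As a sketch of what to expect on disk (the exact layout depends on skill-seekers, hence the note above about using the printed path):

```text
/tmp/skill-seeker/hono/
└── hono/                  # skill-seekers may nest output one level deep
    ├── SKILL.md           # frontmatter + instructions
    └── references/        # optional supporting docs split out of SKILL.md
```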
Apply a two-lens review to the generated output.
Run the review script:
```bash
uv run ~/.claude/skills/skill-seeker/scripts/review.py \
  --path "/tmp/skill-seeker/<skill-name>"
```
Record the JSON output. Flag any warnings.
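review.py defines its own output schema; purely as a hypothetical sketch of the kind of fields worth recording (none of these names are confirmed):

```json
{
  "token_count": 4200,
  "within_context_budget": true,
  "warnings": ["SKILL.md description is longer than recommended"]
}
```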
Read references/quality-checklist.md for the full rubric, then evaluate:
- Structure: does it have a name and description? (see the frontmatter sketch below)
- Security
- Quality (rate A/B/C/D)
- Context Budget (from review script output)
- Value
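For the Structure lens, a SKILL.md that passes would open with frontmatter along these lines (a minimal sketch using the hypothetical hono skill; fields beyond name and description may also be present):

```yaml
---
name: hono
description: Build web apps with Hono. Use when writing routes, middleware, or handlers for the Hono framework.
---
```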
Rewrite the generated SKILL.md applying the fixes identified in review, moving detailed content into references/ files where appropriate. Write the refined SKILL.md and any new reference files back to the output directory.
Run review.py again on the refined output to verify improvements (token budget, warnings resolved).
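The re-check is the same invocation as before, pointed at the refined output:

```bash
uv run ~/.claude/skills/skill-seeker/scripts/review.py \
  --path "/tmp/skill-seeker/<skill-name>"
```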
Present to the user before installing:
Report these fields:
- name: <name>

Generate 3-5 test prompts tailored to the skill's content:
```markdown
## Eval Set: <skill-name>

### Should trigger:
- "<prompt that should activate this skill>"
- "<another prompt that should activate this skill>"

### Should NOT trigger:
- "<prompt about a similar but different topic>"
- "<prompt outside this skill's domain>"

### Knowledge test:
- "<prompt that tests the skill's core, non-obvious knowledge>"
```
Base these on the actual content of the generated skill, not generic templates. The eval set helps the user regression-test the skill after updates.
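For illustration, an eval set for the hypothetical hono skill might look like this (example prompts only; derive real ones from the generated content):

```markdown
## Eval Set: hono

### Should trigger:
- "Add middleware to my Hono app that logs request timing"
- "How do I stream a response from a Hono route?"

### Should NOT trigger:
- "Set up an Express server with logging middleware"
- "What's the most popular Japanese web framework?"

### Knowledge test:
- "Which Hono context method sets a response header?"
```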
After user approval:
```bash
uv run ~/.claude/skills/skill-seeker/scripts/install.py \
  --source "/tmp/skill-seeker/<skill-name>" \
  --target "~/.claude/skills/<skill-name>"
```
Confirm installation:
"<name> installed. Start a new conversation to use it."

If the user declines, leave the generated skill in /tmp/skill-seeker/<skill-name> and tell them where to find it.