# Skill Improvement Loop
Take an existing skill and iteratively improve it based on real-world feedback. Follows the Anthropic best practice: "Iterate on a single task until Claude succeeds, then extract the winning approach."
## Input

The user provides: `$ARGUMENTS`
Parse the input:

- Path to the skill folder
- Optional: description of issues, test results, or feedback

If path is missing, ask: "Which skill should I improve? Provide the path to the skill folder."
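The parsing step above can be sketched as follows. This is a minimal illustration only; the function name and the assumption that `$ARGUMENTS` arrives as a single string of the form `<path> [free-form feedback]` are not specified by the skill itself:

```python
def parse_arguments(arguments: str):
    """Split the raw $ARGUMENTS string into (skill_path, optional_feedback)."""
    parts = arguments.strip().split(maxsplit=1)
    if not parts:
        # No path given: the caller should prompt the user for the skill folder.
        return None, None
    path = parts[0]
    feedback = parts[1] if len(parts) > 1 else None
    return path, feedback

# Hypothetical example input, for illustration:
path, feedback = parse_arguments("skills/pdf-tools outputs are too verbose")
```

Everything after the first whitespace-separated token is treated as free-form feedback, matching the "optional description of issues" above.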
## Step 1: Understand Current State

Read the skill and gather context:

1. Read SKILL.md and all supporting files
2. Run a review (`/skill-maker:review`) to establish a baseline score

Report the current state:

CURRENT STATE: [skill-name]
═══════════════════════════
Grade: [A-F]
Description quality: [X/10]
Key strengths: [list]
Key weaknesses: [list]
## Step 2: Identify Issues

If the user provided issue descriptions, use those. Otherwise, ask:
"What problems are you experiencing with this skill? Select what applies or describe your own:"
Offer common improvement areas, for example:

- Skill doesn't trigger when it should
- Skill triggers on unrelated queries
- Output quality or format problems
## Step 3: Diagnose Root Causes

For each reported issue, determine the root cause:
## Step 4: Apply Fixes

For each diagnosed issue:
Track all changes:
CHANGES APPLIED:
1. [File]: [What changed] - [Why]
2. [File]: [What changed] - [Why]
3. [File]: [What changed] - [Why]
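The change log above can also be kept as structured records, which makes the final report easy to render. A minimal sketch; the field names and example entries are assumptions for illustration:

```python
# Each applied change is recorded with the file touched, what changed, and why.
changes = [
    {"file": "SKILL.md", "what": "Rewrote description", "why": "Skill was not triggering"},
    {"file": "reference.md", "what": "Added output examples", "why": "Output format drifted"},
]

# Render the numbered "CHANGES APPLIED" list from the records.
lines = [f"{i}. {c['file']}: {c['what']} - {c['why']}" for i, c in enumerate(changes, start=1)]
print("\n".join(lines))
```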
## Step 5: Report Results

After all improvements:
IMPROVEMENT RESULTS: [skill-name]
══════════════════════════════════
Before → After
Grade: [X] → [Y]
Description: [X/10] → [Y/10]
Structure: [X/6] → [Y/6]
Body Quality: [X/9] → [Y/9]
Changes made: [X] modifications across [Y] files
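The before/after comparison can be computed from the sub-scores. A sketch under assumed example values; the score names mirror the template above but the numbers are hypothetical:

```python
# Hypothetical baseline and post-improvement sub-scores.
before = {"description": 6, "structure": 4, "body_quality": 5}
after = {"description": 9, "structure": 6, "body_quality": 8}

# Delta per category; after a successful round no category should regress.
deltas = {k: after[k] - before[k] for k in before}
assert all(d >= 0 for d in deltas.values())
```

A regression in any delta is a signal to revisit the corresponding fix rather than declare the round complete.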
## Step 6: Verify

After improving, automatically generate triggering tests for the changed areas:
VERIFY THESE IMPROVEMENTS:
If you fixed triggers, test these queries:
1. "[query that previously failed]" → Should now trigger: YES/NO
2. "[query that over-triggered]" → Should now NOT trigger: YES/NO
If you fixed output, test this scenario:
- Input: [the scenario that was failing]
- Expected: [what should now happen]
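The trigger checks above can be represented as a small table of query/expectation pairs. A sketch only; the example queries and helper name are assumptions, not part of the skill:

```python
# Each entry pairs a test query with whether the skill SHOULD trigger on it.
trigger_tests = [
    {"query": "summarize this PDF", "expect_trigger": True},    # previously failed to trigger
    {"query": "what's the weather", "expect_trigger": False},   # previously over-triggered
]

def passed(test: dict, triggered: bool) -> bool:
    """A check passes when observed triggering matches the expectation."""
    return triggered == test["expect_trigger"]
```

Recording results this way turns the YES/NO checklist into something a later `/skill-maker:test` round can rerun mechanically.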
## Next Steps

Improvement round complete. Options:
1. Run `/skill-maker:test` to generate a full test suite
2. Continue improving (provide more feedback)
3. Done - the skill is ready