From claude-starter-kit
Reviews feedback from skill retrospectives in .claude/feedback/ and updates SKILL.md files with prioritized changes, learnings sections, versions, and changelog.
```bash
npx claudepluginhub sunnypatneedi/claude-starter-kit
```

This skill uses the workspace's default tool permissions.
Process user feedback from skill retrospectives and update skill files to improve them over time.
Read all feedback files in `.claude/feedback/`:

```bash
ls -la .claude/feedback/retro-*.md
```
Look for patterns:
Prioritize updates based on:
High Priority (do first):
Medium Priority:
Low Priority:
For each skill needing updates:
If the skill doesn't have a Learnings section, add one at the end:

```markdown
## Learnings from Use

**[Date]**: [Brief description of what was learned]
- **Feedback**: [What users reported]
- **Update**: [What we changed]
- **Result**: [Expected improvement]
```
If feedback suggests core changes, track versions in the frontmatter at the top of the skill:
```yaml
---
name: skill-name
version: 1.2.0
last_updated: 2026-01-22
changelog:
  - v1.2.0 (2026-01-22): Added missing step for X based on user feedback
  - v1.1.0 (2026-01-15): Updated benchmarks for Y context
  - v1.0.0 (2026-01-01): Initial release
---
```
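The version bump itself can be scripted. The `sed` expressions below are a minimal sketch assuming the frontmatter format shown above; the file path, old version, and new version are illustrative:

```shell
#!/bin/bash
# Sketch: bump the version and last_updated fields in a SKILL.md frontmatter.
# The demo file below mimics the frontmatter format from this doc.
mkdir -p /tmp/version-demo && cd /tmp/version-demo
cat > SKILL.md <<'EOF'
---
name: skill-name
version: 1.2.0
last_updated: 2026-01-22
---
EOF

# GNU sed in-place edit: replace the version and date lines (hypothetical new values)
sed -i 's/^version: 1\.2\.0$/version: 1.3.0/; s/^last_updated: .*/last_updated: 2026-02-01/' SKILL.md
grep '^version:' SKILL.md
```

Note that BSD/macOS `sed -i` requires a suffix argument; this sketch assumes GNU sed.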
Move processed feedback to the archive:

```bash
mkdir -p .claude/feedback/archive
mv .claude/feedback/retro-2026-01-22-*.md .claude/feedback/archive/
```
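When processed feedback spans multiple dates, a date-agnostic variant avoids the hardcoded filename pattern. This is a sketch using the same directory layout as above (the demo files are illustrative):

```shell
#!/bin/bash
# Sketch: archive every top-level retro file regardless of its date stamp.
mkdir -p /tmp/archive-demo/.claude/feedback && cd /tmp/archive-demo
touch .claude/feedback/retro-2026-01-22-143022.md
touch .claude/feedback/retro-2026-01-23-090000.md

mkdir -p .claude/feedback/archive
# -maxdepth 1 keeps already-archived files out of the match
find .claude/feedback -maxdepth 1 -name "retro-*.md" -exec mv {} .claude/feedback/archive/ \;
ls .claude/feedback/archive/
```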
Keep a summary of learnings in `.claude/feedback/SUMMARY.md`:

```markdown
# Feedback Summary

## [Skill Name]
**Total feedback sessions**: 12
**Last updated**: 2026-01-22

**Key learnings**:
- Added step for X (reported by 3 users)
- Updated benchmarks for B2B context (reported by 5 users)
- Clarified workflow around Y (reported by 2 users)

**Patterns**:
- Users in enterprise context need higher benchmarks
- Early-stage startups need more examples
- Non-technical users need clearer explanations of jargon
```
### Step 1: Review Feedback

Read `.claude/feedback/retro-2026-01-22-143022.md`:

```markdown
## Feedback

**Missed important steps?** yes

**Improvements needed:**
The Sean Ellis test threshold of 40% seems high for B2B enterprise products.
We're at 32% "very disappointed" but our retention is 85% D30, which is excellent.
Should the skill mention that thresholds vary by product type?
```
### Step 2: Identify Pattern

Check the other feedback files → 3 more users report that B2B context needs different benchmarks.
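A quick way to confirm a pattern before editing a skill is to count how many feedback files mention the same keyword. A minimal sketch, with illustrative file names and contents:

```shell
#!/bin/bash
# Sketch: count feedback files mentioning a recurring keyword ("B2B" here)
# before deciding whether the pattern justifies a skill update.
mkdir -p /tmp/feedback-demo && cd /tmp/feedback-demo
printf 'B2B threshold seems too high\n' > retro-2026-01-22-1.md
printf 'B2B benchmarks differ from B2C\n' > retro-2026-01-22-2.md
printf 'Loved the workflow, no issues\n' > retro-2026-01-22-3.md

# grep -l lists files containing the keyword; wc -l counts them
matches=$(grep -l "B2B" retro-*.md | wc -l)
echo "Files reporting B2B issue: $matches"
```

If the count crosses your threshold (e.g. 3+ reports), treat it as a pattern rather than an outlier.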
### Step 3: Update Skill

Edit `.claude/skills/product-market-fit/SKILL.md`:
**Before:**

```markdown
## Sean Ellis Test (40% Rule)

"How would you feel if you could no longer use [product]?"
- ≥40% "Very disappointed" = Strong PMF
```
**After:**

```markdown
## Sean Ellis Test (Context-Dependent Thresholds)

"How would you feel if you could no longer use [product]?"

**Thresholds by product type:**
- **Consumer B2C**: ≥40% "Very disappointed" = Strong PMF
- **SMB B2B**: ≥35% "Very disappointed" = Strong PMF
- **Enterprise B2B**: ≥30% "Very disappointed" = Strong PMF (longer sales cycles, different buying psychology)

**Why the difference?**
- Enterprise buyers are more rational than emotional
- Switching costs are higher (contracts, integrations)
- Retention is a better PMF signal for B2B (see Step 2)
```
Add to the Learnings section:

```markdown
## Learnings from Use

**2026-01-22**: Refined Sean Ellis thresholds by product type
- **Feedback**: 4 users reported 40% threshold too high for B2B enterprise
- **Update**: Added context-specific thresholds (B2C 40%, SMB 35%, Enterprise 30%)
- **Result**: More accurate PMF diagnosis for different product types
```
### Step 4: Archive & Track

```bash
mv .claude/feedback/retro-2026-01-22-*.md .claude/feedback/archive/
```
Update `.claude/feedback/SUMMARY.md`:

```markdown
## product-market-fit
**Total feedback sessions**: 4
**Last updated**: 2026-01-22

**Key learnings**:
- Added context-specific Sean Ellis thresholds (reported by 4 users)
- B2B needs different benchmarks than B2C

**Next improvements to consider**:
- Add industry-specific retention benchmarks
- Include examples from different verticals
```
Before updating any skill, ensure:
Track feedback by type to identify systemic issues:
- **Example**: "Skill forgot to mention we need to segment cohorts by acquisition channel" → **Action**: Add step to checklist
- **Example**: "40% D30 retention is not 'strong' for our B2B SaaS, it's average" → **Action**: Update benchmarks with context (B2C vs B2B vs Enterprise)
- **Example**: "I didn't know whether to do cohort analysis before or after the Sean Ellis test" → **Action**: Number steps clearly, add a workflow diagram
- **Example**: "Skill assumes I have 1000+ users, what if I only have 50?" → **Action**: Add an "Early Stage Adaptation" section
- **Example**: "How do I calculate D30 retention in Google Analytics?" → **Action**: Add an "Implementation in Common Tools" section
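Tallying feedback by type makes systemic issues visible at a glance. The `type:` label below is an assumed convention, not something the retro template shown earlier prescribes:

```shell
#!/bin/bash
# Sketch: tally feedback files by a hypothetical "type:" label to surface
# systemic issues (e.g. recurring missing steps vs. one-off benchmark gripes).
mkdir -p /tmp/tally-demo && cd /tmp/tally-demo
printf 'type: missing-step\n'    > retro-1.md
printf 'type: wrong-benchmark\n' > retro-2.md
printf 'type: missing-step\n'    > retro-3.md

# Count occurrences of each type, most frequent first
grep -h '^type:' retro-*.md | sort | uniq -c | sort -rn
```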
✅ Look for patterns across multiple feedback sessions
✅ Update skills incrementally (small, tested changes)
✅ Document why changes were made (Learnings section)
✅ Preserve feedback history (archive, don't delete)
✅ Version skills so users know what changed
❌ Update based on a single piece of feedback (it might be an outlier)
❌ Make skills overly complex trying to cover every edge case
❌ Remove content without understanding why it was there
❌ Ignore feedback for more than 30 days (patterns emerge)
❌ Update without testing the new version
Create a script to summarize new feedback:

```bash
#!/bin/bash
# .claude/hooks/learning/weekly-feedback-digest.sh

echo "📊 Feedback Digest (Last 7 Days)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

find .claude/feedback -name "retro-*.md" -mtime -7 | while read -r file; do
  echo ""
  echo "File: $(basename "$file")"
  grep -A 5 "Skills Used" "$file"
  grep -A 3 "Improvements needed:" "$file"
done
```
When feedback mentions specific issues, auto-tag:
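A minimal auto-tagging sketch, assuming keyword-to-tag mappings and a `tags:` line format that this kit does not prescribe (both are illustrative):

```shell
#!/bin/bash
# Sketch: append a hypothetical "tags:" line when a feedback file mentions
# a known issue keyword. Keywords and tag names are assumptions.
mkdir -p /tmp/tag-demo && cd /tmp/tag-demo
printf 'The benchmark seems too high for B2B.\n' > retro-1.md

for file in retro-*.md; do
  grep -qi 'benchmark'    "$file" && echo 'tags: [benchmarks]'   >> "$file"
  grep -qi 'missing step' "$file" && echo 'tags: [missing-step]' >> "$file"
done
cat retro-1.md
```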
Track improvement over time:
This skill should follow its own advice:
Learnings from Use:
[To be filled as this skill gets used and improved]
Version History:
Next Steps:
The more you use this system, the better your skills become. It's a continuous improvement loop.