Analyzes coding sessions to detect corrections and preferences, proposing targeted updates to active Skills or CLAUDE.md. Triggered by 'learn from this session' or 'update skills'.
`npx claudepluginhub nicknisi/claude-plugins --plugin meta`

This skill uses the workspace's default tool permissions.
This skill analyzes coding sessions to extract durable preferences from corrections and approvals, then proposes targeted updates to Skills that were active during the session. It acts as a learning mechanism across sessions, ensuring Claude improves based on feedback.
The user triggers autoskill after a session where Skills were used. The skill detects signals, filters for quality, maps them to the relevant Skill files, and proposes minimal, reversible edits for review.
By default, analyze only the current session (from SessionStart to now). This ensures fresh, relevant feedback without noise from old sessions.
To analyze patterns across multiple sessions, the user must explicitly request it: "analyze my last 5 sessions" or "look for patterns across this week".
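The current-session default above can be sketched as a filter over transcript events. The `Event` shape and the `SessionStart` marker name here are illustrative assumptions; real session logs may store this differently.

```python
from dataclasses import dataclass

# Hypothetical transcript event; real session logs may use a different shape.
@dataclass
class Event:
    kind: str  # e.g. "SessionStart", "user", "assistant"
    text: str

def current_session(events: list[Event]) -> list[Event]:
    """Return only events from the most recent SessionStart onward."""
    start = 0
    for i, e in enumerate(events):
        if e.kind == "SessionStart":
            start = i  # keep advancing to the last session boundary
    return events[start:]

log = [
    Event("SessionStart", ""),
    Event("user", "old feedback from a previous session"),
    Event("SessionStart", ""),
    Event("user", "use cn() not clsx()"),
]
recent = current_session(log)  # drops everything before the last SessionStart
```

Cross-session analysis would simply skip the boundary filter and scan all events.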
Trigger on explicit requests:
Do NOT activate for one-off corrections or when the user declines skill modifications.
Distinguish between skill-specific behavior and project-wide conventions:
Update Skills when:
Update CLAUDE.md when:
Example:
"cn() utility for className merging" → CLAUDE.md (project convention)

Scan the session for:
Corrections (highest value)
Repeated patterns (high value)
Approvals (supporting evidence)
Ignore:
When signals contradict each other, resolve using this priority order:
If contradictory signals have equal scores, ask user for clarification before proposing changes.
Example conflict:
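The score-based tie-breaking can be sketched as below, assuming the point values implied by the worked example later in this document (a correction worth 5 points, a repeated pattern worth 3); the approval weight is a guess, not documented behavior.

```python
# Assumed point values: 5 and 3 match the "8 points (5 + 3)" worked
# example; the approval weight of 1 is an illustrative guess.
WEIGHTS = {"correction": 5, "pattern": 3, "approval": 1}

def score(signals: list[str]) -> int:
    """Sum the weights of the signals backing one side of a conflict."""
    return sum(WEIGHTS.get(s, 0) for s in signals)

def resolve(side_a: list[str], side_b: list[str]) -> str:
    """Pick the higher-scoring side; on a tie, defer to the user."""
    sa, sb = score(side_a), score(side_b)
    if sa == sb:
        return "ask user for clarification"
    return "prefer A" if sa > sb else "prefer B"

decision = resolve(["correction", "pattern"], ["approval"])  # 8 vs 1
```

An explicit correction plus a repeated pattern (8 points) outweighs a single approval; two equally weighted corrections would trigger a clarifying question instead.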
Before proposing any change, ask:
Only propose changes that pass all four.
Worth capturing:
("cn() not clsx() here")
("@/components/ui")

NOT worth capturing (I already know this):
If I'd give the same advice to any project, it doesn't belong in a skill.
Match each signal to the Skill that was active and relevant during the session:
Update existing Skill when:
Propose new Skill when:
Update CLAUDE.md instead when:
Ignore signals when:
Scoring example:
For each proposed edit, provide:
File: path/to/SKILL.md
Section: [existing section or "new section: X"]
Confidence: HIGH | MEDIUM
Score: [confidence points]
Signal: "[exact user quote or paraphrase]"
Current text (if modifying):
> existing content
Proposed text:
> updated content
Rationale: [one sentence]
Group proposals by file. Present HIGH confidence changes first.
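The grouping and ordering rule above could look like the following sketch; the `Proposal` fields mirror the template earlier but are illustrative, not a fixed schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Minimal stand-in for the proposal template; field names are illustrative.
@dataclass
class Proposal:
    file: str
    confidence: str  # "HIGH" or "MEDIUM"
    score: int
    rationale: str

def group_for_review(proposals: list[Proposal]) -> dict[str, list[Proposal]]:
    """Group proposals by file, ordering HIGH confidence (then higher scores) first."""
    rank = {"HIGH": 0, "MEDIUM": 1}
    grouped: dict[str, list[Proposal]] = defaultdict(list)
    for p in proposals:
        grouped[p.file].append(p)
    for items in grouped.values():
        items.sort(key=lambda p: (rank[p.confidence], -p.score))
    return dict(grouped)

ps = [
    Proposal("SKILL.md", "MEDIUM", 3, "naming preference"),
    Proposal("SKILL.md", "HIGH", 8, "error handling rule"),
]
ordered = group_for_review(ps)["SKILL.md"]  # HIGH-confidence proposal first
```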
Session context: User corrected error handling twice during code-simplifier usage
Detected signals:
Proposed change:
File: plugins/essentials/skills/code-simplifier/SKILL.md
Section: ## When NOT to simplify
Confidence: HIGH
Score: 8 points (5 + 3)
Signal: "Don't add try-catch blocks for internal functions" + pattern of removing such blocks
Current text:
> - Don't add error handling, fallbacks, or validation for scenarios that can't happen
Proposed text:
> - Don't add error handling, fallbacks, or validation for scenarios that can't happen
> - Don't add try-catch blocks for internal functions that are called by trusted code
Rationale: User explicitly corrected this twice; it's a specific, actionable rule for this project
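Applying an approved change like this one amounts to appending a bullet at the end of an existing section. A minimal sketch, with the caveat that it only understands `## `-level headings and the helper name is made up:

```python
def append_to_section(doc: str, section: str, bullet: str) -> str:
    """Append a bullet at the end of a markdown section (before the next '## ')."""
    out: list[str] = []
    in_section = False
    inserted = False
    for line in doc.splitlines():
        if line.startswith("## "):
            if in_section and not inserted:
                out.append(bullet)  # close out the target section first
                inserted = True
            in_section = line.strip() == section
        out.append(line)
    if in_section and not inserted:  # target section was last in the file
        out.append(bullet)
    return "\n".join(out)

skill = "## When NOT to simplify\n- existing rule\n## Other section"
updated = append_to_section(
    skill,
    "## When NOT to simplify",
    "- Don't add try-catch blocks for internal functions called by trusted code",
)
```

Keeping edits this small is what makes each autoskill change easy to review and revert.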
Always present changes for review before applying. Format:
## autoskill summary
Detected [N] durable preferences from this session.
### HIGH confidence (recommended to apply)
- [change 1] - Score: X points
- [change 2] - Score: X points
### MEDIUM confidence (review carefully)
- [change 3] - Score: X points
Apply high confidence changes? [y/n/selective]
Wait for explicit approval before editing any file.
When proposing changes to multiple files:
Example order:
cn() utility convention (HIGH, 8 points)

When approved:
`chore(autoskill): [brief description]`

All autoskill changes are reversible:
If git is available:
`git log --grep="autoskill" --oneline`
`git revert <commit-hash>`
`git log --grep="autoskill" --format="%H" | xargs -n1 git revert`

Manual rollback:
`git show <commit-hash>`

Prevention:
`chore(autoskill): add error handling rule to code-simplifier`

Use the AskUserQuestion tool when:
Ambiguous signals:
Contradictory feedback:
Boundary decisions:
Scope uncertainty:
Example questions:
"I detected two corrections about error handling:
1. 'Don't add try-catch for internal functions'
2. 'Always validate user input'
These seem contradictory. Should I:
- Add both rules with specific contexts?
- Apply different rules to internal vs external code?
- Something else?"
Never guess or assume: when in doubt, downgrade to MEDIUM confidence and ask.