From lawvable-awesome-legal-skills
Analyzes skill usage sessions for feedback signals like corrections and edge cases, proposing complete, precise, atomic improvements. Use via `self-improve` or automatically post-session.
`npx claudepluginhub joshuarweaver/cascade-business-ops --plugin lawvable-awesome-legal-skills`

This skill uses the workspace's default tool permissions.
Analyze the current conversation and propose improvements to skills based on corrections, successes, and edge cases discovered during the work session.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in the current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance, then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Usage:

| Command | Description |
|---|---|
| `self-improve` | Analyze session and propose improvements |
| `self-improve [skill-name]` | Target a specific skill |
| `self-improve on` | Enable automatic mode (hook) |
| `self-improve off` | Disable automatic mode |
| `self-improve status` | Show automatic mode status |
| `self-improve [skill-name] history` | Show modification history |

If no skill name is provided, list the available skills from the skills/ directory and ask:
Which skill should I analyze for this session?
[List skills found in skills/ directory]
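When no skill name is supplied, the candidates come from the skills/ directory. A minimal sketch of that lookup, assuming one subdirectory per skill with a SKILL.md inside (the function name is illustrative, not part of the skill):

```python
from pathlib import Path

def list_skills(root: Path = Path("skills")) -> list[str]:
    """Return skill names: subdirectories of skills/ that contain a SKILL.md."""
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir()
                  if p.is_dir() and (p / "SKILL.md").exists())
```

Directories without a SKILL.md are skipped, so stray folders under skills/ are never offered as analysis targets.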
Scan the conversation for signals - moments where the user expressed feedback:
| Signal Type | Examples |
|---|---|
| Correction | "No", "That's not right", "It's missing X", "Always do Y", user rewrites output |
| Success | "Perfect", "Yes", "Exactly", user accepts without changes |
| Edge case | User needed a workaround, skill couldn't handle the request |
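The scan above can be sketched as a simple keyword pass. The phrase lists here are illustrative assumptions drawn from the table, not the skill's actual matching logic; real edge-case detection needs conversational context, not keywords:

```python
# Hypothetical phrase lists based on the signal table -- not exhaustive.
CORRECTION_PHRASES = ["that's not right", "it's missing", "always do", "no,"]
SUCCESS_PHRASES = ["perfect", "exactly", "yes"]

def classify_signal(message: str) -> str:
    """Tag a user message as a correction, success, or unknown signal."""
    text = message.lower()
    # Check corrections first: "No, but..." should not read as success.
    if any(p in text for p in CORRECTION_PHRASES):
        return "correction"
    if any(p in text for p in SUCCESS_PHRASES):
        return "success"
    return "unknown"  # edge cases usually need context, not keywords
```

For example, `classify_signal("No, it's missing the venue clause")` returns `"correction"`, while an unmatched message falls through to `"unknown"` for manual review.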
For each correction signal, evaluate if it can become a good skill instruction.
1. COMPLETE
The instruction includes all information needed to apply it. No need to look elsewhere or make assumptions.
| Grade | Example |
|---|---|
| Pass | "Structure output as: Key Terms / Risk Areas / Suggested Revisions" |
| Fail | "Use the standard format" (which format?) |
| Fail | "Follow our firm's guidelines" (what guidelines?) |
2. PRECISE
No vague or subjective terms. Two different people reading the instruction would understand it the same way.
| Grade | Example |
|---|---|
| Pass | "Flag non-compete clauses over 12 months as high risk" |
| Fail | "Be more thorough in the analysis" |
| Fail | "Make it more appropriate for clients" |
3. ATOMIC
One instruction addresses one single requirement. Multiple checks should be split into separate instructions.
| Grade | Example |
|---|---|
| Pass | "Check for governing law clause" |
| Fail | "Check for governing law, jurisdiction, and arbitration clauses" (three checks - split them) |
4. STABLE
If referencing regulations or standards, specify the version or date. The instruction should be evaluable the same way regardless of when it's read.
| Grade | Example |
|---|---|
| Pass | "Review the termination provisions under our internal policy [policy name and reference], dated December 12, 2024." |
| Fail | "Follow latest market standards" (which standards? will change over time) |
| Criteria Met | Action |
|---|---|
| All 4 criteria pass | Add to skill directly |
| Fewer than 4 criteria pass | Ask for clarification (see Step 5) |
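The gate above is all-or-nothing: one failed criterion routes the instruction to clarification. A sketch, assuming a simple record of the four checks (the class and method names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Pass/fail results for the four instruction criteria."""
    complete: bool
    precise: bool
    atomic: bool
    stable: bool

    def action(self) -> str:
        # All four criteria must pass for a direct addition.
        if self.complete and self.precise and self.atomic and self.stable:
            return "add to skill directly"
        return "ask for clarification"
```

Flipping any single field from `True` to `False` changes the outcome to `"ask for clarification"`; there is no partial credit.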
When feedback doesn't meet all criteria, ask for what's missing using the AskUserQuestion tool:
I detected a correction but need more information to improve the skill.
You said: "[user's feedback]"
To create a clearer instruction, I need the following information:
[Structured tool call listing what's missing based on failed criteria]
If the user provides clarification → Update the instruction and proceed to Step 6.
If the user prefers the original → Proceed to Step 6 with the original instruction.
--- Learning: [skill-name] ---
Proposed additions:
1. "[exact instruction to add]"
Source: "[quote from conversation]"
2. "[exact instruction to add]"
Source: "[quote from conversation]"
---
Apply these changes? [Y/n]
Update skills/[skill-name]/SKILL.md with the approved instructions.

Update skills/[skill-name]/CHANGELOG.md with an entry in this format:
## [DATE (format: "January 7, 2026")]
[Description of changes in natural language, 1-3 sentences]
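The entry format above can be appended mechanically. A minimal sketch, assuming a hypothetical `append_changelog` helper and the human-readable date style shown:

```python
from datetime import date
from pathlib import Path

def append_changelog(changelog: Path, description: str, today: date) -> str:
    """Append a dated entry ('## January 7, 2026' style) plus a short description."""
    # strftime('%B') gives the full month name; today.day avoids zero-padding.
    entry = f"## {today.strftime('%B')} {today.day}, {today.year}\n{description}\n\n"
    with changelog.open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```

Appending (rather than rewriting) keeps the full modification history in place, which is what `self-improve [skill-name] history` reads back.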
For signals that couldn't be processed, offer to save:
Save these observations for later review?
- "[signal 1]" - Status: [why insufficient]
- "[signal 2]" - Status: [why insufficient]
If yes, append to skills/[skill-name]/OBSERVATIONS.md
Automatic mode is toggled with a .disabled marker file:

- `self-improve on`: `rm -f ./.disabled`
- `self-improve off`: `touch ./.disabled`
- `self-improve status`: check whether ./.disabled exists and report.
To roll back a change, restore skills/[skill-name]/SKILL.md and update skills/[skill-name]/CHANGELOG.md with a rollback note.

User said: "Always flag non-compete clauses over 12 months as high risk"
Evaluation:
- Complete: Pass (everything needed to apply it is stated)
- Precise: Pass ("over 12 months" and "high risk" read the same way to any reviewer)
- Atomic: Pass (a single check)
- Stable: Pass (no time-dependent reference)

Result: Add directly
User said: "Flag any non-market-standard indemnification clause"
Evaluation:
- Complete: Fail ("market-standard" is not defined in the instruction)
- Precise: Fail (two readers could disagree on what is non-market-standard)
- Atomic: Pass (a single check)
- Stable: Fail (market standards shift over time)
Action: Ask for clarification using the AskUserQuestion tool:
I detected a correction but need more details.
You said: "Flag any non-market-standard indemnification clause"
To make this actionable, can you specify:
- What makes an indemnification clause "non-market-standard"? (e.g., uncapped liability, coverage of indirect damages, no carve-outs for gross negligence)
Do you want to provide more details, or should I add the instruction as you stated it?
If user clarifies: Update the instruction and add it. If user prefers the original: Add the instruction as stated.