Extract learnings from session corrections and patterns, update skill files with persistent memory. Implements Loop 1.5 - per-session micro-learning between execution and meta-optimization.
Install via:
- `/plugin marketplace add DNYoussef/context-cascade`
- `/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Supporting files: `README.md`, `examples/reflection-output.md`, `tests/test-signal-detection.md`

Before writing ANY code, you MUST check:
- `.claude/library/catalog.json`
- `.claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md`
- `D:\Projects\*`

| Match | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern exists | FOLLOW pattern |
| In project | EXTRACT |
| No match | BUILD (add to library after) |
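The decision table above can be sketched as a simple precedence check. This is a minimal, illustrative sketch; the thresholds mirror the table, while the function and enum names are assumptions, not part of the skill.

```python
# Minimal sketch of the pre-code reuse decision; thresholds mirror the table above.
# choose_action and Action are illustrative names only.
from enum import Enum

class Action(Enum):
    REUSE = "reuse directly"
    ADAPT = "adapt minimally"
    FOLLOW = "follow existing pattern"
    EXTRACT = "extract from project"
    BUILD = "build, then add to library"

def choose_action(library_match: float, pattern_exists: bool, in_project: bool) -> Action:
    """Map the pre-code checks to an action, highest-priority rule first."""
    if library_match > 0.90:
        return Action.REUSE
    if library_match >= 0.70:
        return Action.ADAPT
    if pattern_exists:
        return Action.FOLLOW
    if in_project:
        return Action.EXTRACT
    return Action.BUILD
```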
The Reflect skill solves a fundamental limitation of LLMs: they don't learn from session to session. Every conversation starts from zero, causing the same mistakes to recur and forcing users to repeat corrections endlessly.
Philosophy: Corrections are signals. Approvals are confirmations. Both should be captured, classified, and persisted into skill files where they become permanent knowledge that survives across sessions.
Methodology: A 7-phase extraction and update pipeline, detailed in the Main Workflow below.
Value Proposition: Correct once, never again. Transform ephemeral session corrections into persistent skill improvements that compound over time.
The Reflect skill operates on 5 core principles:
1. Corrections are the strongest learning signals. When a user says "No, use X instead", this is more valuable than explicit instructions because it reveals a gap between expectation and delivery.
   In practice:
2. All learnings must have VERIX-aligned confidence ceilings. Don't overclaim certainty from limited evidence.
   In practice:
3. Store learnings in SKILL.md, not in embeddings. Skill files are human-readable, version-controlled, and immediately effective.
   In practice:
4. Preview all changes before applying. HIGH confidence changes require explicit approval; automation only for MEDIUM/LOW.
   In practice:
5. Session learnings aggregate into system optimization. Micro-learning feeds macro-optimization.
   In practice:
Use Reflect when:
Do NOT use Reflect when:
#### Phase 1: Signal Detection
**Purpose**: Scan conversation for learning signals
**Agent**: intent-parser (from registry)
Input Contract:
inputs:
conversation_context: string # Full session transcript
invoked_skills: list[string] # Skills used in session
Process:
Signal Types:
| Type | Pattern | Confidence |
|---|---|---|
| Correction | "No, use X", "That's wrong", "Actually..." | HIGH (0.90) |
| Explicit Rule | "Always do X", "Never do Y" | HIGH (0.90) |
| Approval | "Perfect", "Yes, exactly", "That's right" | MEDIUM (0.75) |
| Rejection | User rejected proposed solution | MEDIUM (0.75) |
| Style Cue | Formatting or naming preferences | LOW (0.55) |
| Observation | Implicit preference detected | LOW (0.55) |
Output Contract:
outputs:
signals: list[Signal]
Signal:
type: correction|explicit_rule|approval|rejection|style_cue|observation
content: string # The actual learning
context: string # Surrounding context
confidence: float # 0.55-0.90
ground: string # Evidence source
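The detection step can be sketched as a pattern scan over the transcript. This is a hypothetical sketch assuming the Signal contract above; the regexes, the `Signal` dataclass, and `detect_signals` are illustrative, not the skill's actual implementation.

```python
# Illustrative Phase 1 sketch: scan the transcript for signal patterns.
import re
from dataclasses import dataclass

@dataclass
class Signal:
    type: str          # correction | explicit_rule | approval | rejection | style_cue | observation
    content: str       # the actual learning
    context: str       # surrounding context
    confidence: float  # 0.55-0.90
    ground: str        # evidence source

# Pattern -> (signal type, confidence), mirroring the Signal Types table above.
SIGNAL_PATTERNS = [
    (r"\b(no,? use|that's wrong|actually)\b", "correction", 0.90),
    (r"\b(always|never)\b", "explicit_rule", 0.90),
    (r"\b(perfect|yes,? exactly|that's right)\b", "approval", 0.75),
]

def detect_signals(conversation_context: str) -> list[Signal]:
    """Return one signal per matching line; real detection would be richer."""
    signals = []
    for line in conversation_context.splitlines():
        for pattern, sig_type, conf in SIGNAL_PATTERNS:
            if re.search(pattern, line, re.IGNORECASE):
                signals.append(Signal(sig_type, line.strip(), line, conf, "user-message"))
                break  # one signal per line is enough for this sketch
    return signals
```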
#### Phase 2: Skill Mapping
**Purpose**: Map signals to the skills they apply to
**Agent**: skill-mapper (custom logic)
Process:
Output Contract:
outputs:
skill_signals: dict[skill_name, list[Signal]]
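A minimal sketch of the mapping step, assuming naive name matching against the invoked skills; `map_signals_to_skills` and the last-invoked-skill fallback are illustrative assumptions, not the skill-mapper's actual logic.

```python
# Illustrative Phase 2 sketch: group detected signals by the skill they mention,
# falling back to the most recently invoked skill when no name matches.
from collections import defaultdict

def map_signals_to_skills(signals: list, invoked_skills: list[str]) -> dict[str, list]:
    skill_signals: dict[str, list] = defaultdict(list)
    for signal in signals:
        matched = [s for s in invoked_skills if s.lower() in signal.context.lower()]
        targets = matched or invoked_skills[-1:]  # fallback: last invoked skill
        for skill in targets:
            skill_signals[skill].append(signal)
    return dict(skill_signals)
```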
#### Phase 3: Confidence Classification
**Purpose**: Apply VERIX-aligned confidence levels
**Agent**: prompt-architect patterns
Classification Rules:
- HIGH [conf:0.90]
  - Explicit "never/always" rules
  - Direct corrections with clear alternative
  - User used emphatic language
- MEDIUM [conf:0.75]
  - Successful patterns (2+ confirmations)
  - Single strong approval
  - Rejection with implicit preference
- LOW [conf:0.55]
  - Single observations
  - Style cues without explicit statement
  - Inferred preferences
Ceiling Enforcement:
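The enforcement step can be sketched as clamping a raw score to the ceiling for its signal type and then bucketing into HIGH/MEDIUM/LOW. That reading is an assumption; the `classify` helper below is illustrative.

```python
# Illustrative Phase 3 sketch: clamp to the per-type ceiling, then bucket by level.
CEILINGS = {
    "correction": 0.90, "explicit_rule": 0.90,
    "approval": 0.75, "rejection": 0.75,
    "style_cue": 0.55, "observation": 0.55,
}

LEVELS = [(0.90, "HIGH"), (0.75, "MEDIUM"), (0.55, "LOW")]

def classify(signal_type: str, raw_confidence: float) -> tuple[str, float]:
    """Return (level, clamped confidence) for a detected signal."""
    confidence = min(raw_confidence, CEILINGS.get(signal_type, 0.55))
    for threshold, level in LEVELS:
        if confidence >= threshold:
            return level, confidence
    return "LOW", confidence
```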
#### Phase 4: Update Generation
**Purpose**: Generate proposed skill file updates
**Agent**: skill-forge patterns
Process:
Output Format:
## Proposed Updates
**Skill: {skill_name}** (v{old} -> v{new})
### Signals Detected
- {count} corrections (HIGH)
- {count} approvals (MEDIUM)
- {count} observations (LOW)
### Diff Preview
```diff
+ ### High Confidence [conf:0.90]
+ - {learning content} [ground:{source}:{date}]
```

reflect({skill}): [{LEVEL}] {description}

[Y] Accept [N] Reject [E] Edit with natural language
#### Phase 5: Apply Updates
**Purpose**: Safely update skill files
**Agent**: skill-forge
**Process**:
1. If approved (manual) or auto-mode enabled:
2. Read skill file
3. Find or create LEARNED PATTERNS section
4. Append new learnings under appropriate confidence level
5. Increment x-version in frontmatter
6. Set x-last-reflection to current timestamp
7. Increment x-reflection-count
8. Write updated file
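Steps 2-8 can be sketched as a single file rewrite. This is a simplified sketch, assuming semantic-version frontmatter and a regex-based bump; it omits x-reflection-count handling and is not the actual skill-forge logic.

```python
# Simplified sketch of Phase 5: append a learning and bump frontmatter metadata.
import re
from datetime import datetime, timezone
from pathlib import Path

def apply_learning(skill_path: str, level_heading: str, learning_line: str) -> None:
    text = Path(skill_path).read_text(encoding="utf-8")

    # Find or create the LEARNED PATTERNS section and the confidence-level heading,
    # then append the new learning directly under that heading.
    if "## LEARNED PATTERNS" not in text:
        text += "\n## LEARNED PATTERNS\n"
    if level_heading not in text:
        text += f"\n{level_heading}\n"
    text = text.replace(level_heading, f"{level_heading}\n{learning_line}", 1)

    # Bump the patch component of x-version and stamp x-last-reflection.
    def bump(match: re.Match) -> str:
        major, minor, patch = match.group(1).split(".")
        return f"x-version: {major}.{minor}.{int(patch) + 1}"
    text = re.sub(r"x-version:\s*(\d+\.\d+\.\d+)", bump, text, count=1)
    text = re.sub(r"x-last-reflection:.*",
                  f"x-last-reflection: {datetime.now(timezone.utc).isoformat()}",
                  text, count=1)

    Path(skill_path).write_text(text, encoding="utf-8")
```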
**LEARNED PATTERNS Section Format**:
```markdown
## LEARNED PATTERNS
### High Confidence [conf:0.90]
- ALWAYS check for SQL injection vulnerabilities [ground:user-correction:2026-01-05]
- NEVER use inline styles in components [ground:user-correction:2026-01-03]
### Medium Confidence [conf:0.75]
- Prefer async/await over .then() chains [ground:approval-pattern:3-sessions]
- Use descriptive variable names in examples [ground:approval-pattern:2-sessions]
### Low Confidence [conf:0.55]
- User may prefer verbose error messages [ground:observation:1-session]
```

#### Phase 6: Memory Storage
**Purpose**: Persist learnings for Meta-Loop aggregation
**Agent**: memory-mcp integration
Storage Format:
{
"WHO": "reflect-skill:{session_id}",
"WHEN": "{ISO8601_timestamp}",
"PROJECT": "{project_name}",
"WHY": "session-learning",
"x-skill": "{skill_name}",
"x-version-before": "{old_version}",
"x-version-after": "{new_version}",
"x-signals": {
"corrections": 2,
"approvals": 1,
"observations": 1
},
"x-learnings": [
{
"content": "ALWAYS check for SQL injection",
"confidence": 0.90,
"ground": "user-correction",
"category": "HIGH"
}
]
}
Storage Path: sessions/reflect/{project}/{skill}/{timestamp}
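A sketch of assembling the record and key above; the signal-count mapping is simplified (HIGH counted as corrections, MEDIUM as approvals) and the actual Memory MCP write is omitted.

```python
# Illustrative Phase 6 sketch: build the storage key and record shown above.
from datetime import datetime, timezone

def build_memory_record(session_id, project, skill, old_version, new_version, learnings):
    # Simplified count: one bucket per confidence category.
    signals = {"corrections": 0, "approvals": 0, "observations": 0}
    for learning in learnings:
        if learning["category"] == "HIGH":
            signals["corrections"] += 1
        elif learning["category"] == "MEDIUM":
            signals["approvals"] += 1
        else:
            signals["observations"] += 1

    now = datetime.now(timezone.utc)
    key = f"sessions/reflect/{project}/{skill}/{now.strftime('%Y%m%dT%H%M%SZ')}"
    record = {
        "WHO": f"reflect-skill:{session_id}",
        "WHEN": now.isoformat(),
        "PROJECT": project,
        "WHY": "session-learning",
        "x-skill": skill,
        "x-version-before": old_version,
        "x-version-after": new_version,
        "x-signals": signals,
        "x-learnings": learnings,
    }
    return key, record
```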
#### Phase 7: Git Commit
**Purpose**: Version the skill evolution
**Agent**: bash git commands
Commit Format:
reflect({skill_name}): [{LEVEL}] {description}
- Added {n} learnings from session
- Confidence levels: HIGH:{n}, MEDIUM:{n}, LOW:{n}
- Evidence: user-correction, approval-pattern, observation
Generated by reflect skill v1.0.0
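Assuming commits are made with plain git, the commit step might look like the sketch below; the helper name and argument shapes are illustrative.

```python
# Sketch of Phase 7: format the commit message above and commit the skill file.
import subprocess

def commit_reflection(skill_name: str, level: str, description: str,
                      counts: dict[str, int], skill_path: str) -> str:
    message = (
        f"reflect({skill_name}): [{level}] {description}\n\n"
        f"- Added {sum(counts.values())} learnings from session\n"
        f"- Confidence levels: HIGH:{counts.get('HIGH', 0)}, "
        f"MEDIUM:{counts.get('MEDIUM', 0)}, LOW:{counts.get('LOW', 0)}\n"
        f"- Evidence: user-correction, approval-pattern, observation\n\n"
        f"Generated by reflect skill v1.0.0"
    )
    subprocess.run(["git", "add", skill_path], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    return subprocess.run(["git", "rev-parse", "HEAD"],
                          capture_output=True, text=True, check=True).stdout.strip()
```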
Different session types require different reflection approaches:
Patterns: "bug", "fix", "error", "not working" Common Corrections: Framework choice, error handling patterns, edge cases Key Focus: What was the root cause? What pattern prevents recurrence? Approach: Extract diagnostic insights and prevention rules
Patterns: "review", "check", "looks good", "change this" Common Corrections: Style violations, security concerns, naming Key Focus: What standards emerged? What was consistently flagged? Approach: Extract style rules and security patterns
Patterns: "build", "create", "implement", "add" Common Corrections: Architecture choices, component usage, API patterns Key Focus: What design decisions worked? What was rejected? Approach: Extract architectural preferences and component rules
Patterns: "document", "explain", "readme", "describe" Common Corrections: Tone, structure, level of detail Key Focus: What style resonated? What format preferred? Approach: Extract documentation style guide entries
Track signals across sessions to identify recurring patterns:
Learn from what was NOT corrected:
When correcting skill A, check impact on skills that depend on it:
Handle contradictory signals:
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Over-Learning | Capturing every small preference | Only persist signals that appear 2+ times or are explicit rules |
| Under-Confidence | All learnings at LOW confidence | Explicit "always/never" statements are HIGH; don't downgrade |
| Eval-Harness Modification | Attempting to update frozen harness | BLOCK: eval-harness never self-improves |
| Silent Updates | Applying changes without preview | ALWAYS show diff and require confirmation for HIGH |
| Orphan Learnings | Storing in Memory but not SKILL.md | Write to BOTH: skill file for immediate effect, Memory for aggregation |
| Version Skip | Not incrementing x-version | ALWAYS bump version on any skill file change |
Full Mode (default for manual /reflect):
Quick Mode (/reflect --quick or auto mode):
When to ask user:
When to auto-apply:
| Skill | When Used Before | What It Provides |
|---|---|---|
| intent-analyzer | Before reflection | Parsed user intent for signal context |
| prompt-architect | For constraint classification | HARD/SOFT/INFERRED distinction |
| Skill | When Used After | What It Does |
|---|---|---|
| skill-forge | After signal classification | Applies safe SKILL.md updates |
| bootstrap-loop | During Meta-Loop | Aggregates learnings for optimization |
| Skill | When Used Together | How They Coordinate |
|---|---|---|
| memory-manager | During storage phase | Stores in Memory MCP |
| github-integration | During commit phase | Handles git operations |
Required:
Optional:
inputs:
# Required
trigger: manual | automatic # How reflect was invoked
# Optional
skill_name: string # Target specific skill (else detect from session)
mode: full | quick # Reflection depth
auto_apply: boolean # Skip approval for MEDIUM/LOW (requires reflect-on)
outputs:
# Always returned
signals_detected: list[Signal]
skills_updated: list[string]
learnings_stored: list[MemoryKey]
# If changes made
skill_diffs: dict[skill_name, diff_preview]
version_changes: dict[skill_name, {old: string, new: string}]
# If git enabled
commit_hash: string
commit_message: string
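For reference, the contracts above could be expressed as Python types; the field names follow the YAML, while the optionality markers are approximate assumptions.

```python
# Type sketch of the input/output contracts above, for reference only.
from typing import Literal, Optional, TypedDict

class ReflectInputs(TypedDict, total=False):
    trigger: Literal["manual", "automatic"]   # required
    skill_name: Optional[str]                 # target a specific skill
    mode: Literal["full", "quick"]            # reflection depth
    auto_apply: bool                          # skip approval for MEDIUM/LOW (requires reflect-on)

class ReflectOutputs(TypedDict, total=False):
    signals_detected: list
    skills_updated: list[str]
    learnings_stored: list[str]
    skill_diffs: dict[str, str]               # only if changes made
    version_changes: dict[str, dict[str, str]]
    commit_hash: str                          # only if git enabled
    commit_message: str
```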
Role in Meta-Loop: The reflect skill is ITSELF subject to Meta-Loop optimization:
Eval Harness Integration:
Memory Namespace:
skills/tooling/reflect/{project}/{timestamp}

#### Example 1: Manual Reflection
User: /reflect
Claude: Analyzing session for learning signals...
## Session Reflection Report
### Signals Detected
- 2 corrections (HIGH confidence)
- 1 approval (MEDIUM confidence)
- 1 observation (LOW confidence)
### Proposed Updates
**Skill: debug** (v2.1.0 -> v2.1.1)
```diff
+ ## LEARNED PATTERNS
+
+ ### High Confidence [conf:0.90]
+ - ALWAYS check for null pointer exceptions before accessing object properties [ground:user-correction:2026-01-05]
+ - Use structured logging instead of console.log in production code [ground:user-correction:2026-01-05]
+
+ ### Medium Confidence [conf:0.75]
+ - Prefer try-catch blocks around async operations [ground:approval-pattern:2026-01-05]
+
+ ### Low Confidence [conf:0.55]
+ - User may prefer detailed stack traces in error messages [ground:observation:2026-01-05]
```

reflect(debug): [HIGH] Add null check and structured logging rules

[Y] Accept [N] Reject [E] Edit with natural language
User: Y
Claude: Updates applied successfully.
Confidence: 0.85 (ceiling: observation 0.95)
#### Example 2: Automatic Reflection via Stop Hook
[Session ending - reflect-on is enabled]
Detected 3 learning signals across 2 skills.
Skill: code-review
Skill: tester
Auto-applied 3 MEDIUM/LOW learnings. Stored in Memory MCP. Git commit: reflect(code-review,tester): [MEDIUM] Session learnings
Next session will include these patterns.
#### Example 3: Targeting Specific Skill
User: /reflect code-review
Claude: Reflecting on code-review skill specifically...
Found 4 signals:
Propose adding to skills/quality/code-review/SKILL.md? [Y] Accept [N] Reject [E] Edit
### Troubleshooting
| Problem | Solution |
|---------|----------|
| **No signals detected** | Ensure conversation had corrections or approvals. Trivial sessions may not have learnings. |
| **Skill not found** | Verify skill was invoked via Skill() or /command. Check skill-index.json. |
| **Memory MCP unavailable** | Learnings still applied to skill files. Retry memory storage later. |
| **Git commit failed** | Check git status. Ensure no merge conflicts. Manual commit may be needed. |
| **Conflicting learnings** | User must resolve. Show both versions and ask which to keep. |
| **Permission denied on skill file** | Check file permissions. May need elevated access. |
| **x-version not incrementing** | Ensure YAML frontmatter is valid. Check for parsing errors. |
### Conclusion
The Reflect skill transforms ephemeral session corrections into persistent knowledge by implementing **Loop 1.5** - a per-session micro-learning layer that bridges immediate execution (Loop 1) and long-term optimization (Loop 3).
Key capabilities:
- **Signal Detection**: Automatically identifies corrections, approvals, and patterns
- **Confidence Classification**: VERIX-aligned levels (HIGH/MEDIUM/LOW) prevent overclaiming
- **Safe Updates**: Preview-first approach with approval gates for critical changes
- **Memory Integration**: Feeds Meta-Loop for system-wide optimization
- **Version Control**: Git tracking enables rollback and evolution analysis
By capturing learnings at the session level and persisting them in skill files, the Reflect skill enables a self-improving development experience where corrections compound into expertise over time.
### Completion Verification
- [x] YAML frontmatter with x-version, x-category, x-vcl-compliance
- [x] Overview with philosophy, methodology, value proposition
- [x] Core Principles (5 principles with "In practice" items)
- [x] When to Use with use/don't-use criteria
- [x] Main Workflow with 7 phases, agents, input/output contracts
- [x] Pattern Recognition for different session types
- [x] Advanced Techniques (multi-session, negative space, dependencies, conflicts)
- [x] Common Anti-Patterns table with Problem/Solution
- [x] Practical Guidelines for full/quick modes
- [x] Cross-Skill Coordination (upstream/downstream/parallel)
- [x] MCP Requirements with WHY explanations
- [x] Input/Output Contracts in YAML
- [x] Recursive Improvement integration
- [x] Examples (3 complete scenarios)
- [x] Troubleshooting table
- [x] Conclusion summarizing value
- [x] Completion Verification checklist
Confidence: 0.85 (ceiling: observation 0.95) - New skill created following Skill Forge v3.2 required sections with full Tier 1-4 coverage.
---
## LEARNED PATTERNS
### High Confidence [conf:0.90] - CRITICAL
#### Skill Package Format
- Skills use `.skill` extension (NOT `.skill.zip`). The `.skill` file IS a zip archive with renamed extension.
- Correct location: `skills/packaged/` folder (NOT `skills/dist/`)
- [ground:user-correction:2026-01-08]
#### Multi-File Update Workflow
When updating a packaged skill with learned patterns, update ALL relevant files:
| File | Update Required | Content |
|------|-----------------|---------|
| SKILL.md | Always | Add to LEARNED PATTERNS section |
| CHANGELOG.md | Always | Add version entry with date and description |
| manifest.json | Always | Increment version number |
| quick-reference.md | If operational | Add new tips/workflows |
| readme.md | If scope changes | Update overview |
**Complete Workflow:**
```bash
# 1. Unzip .skill file
unzip skills/packaged/skill-name.skill -d /tmp/skill-update/skill-name
# 2. Update files: SKILL.md, CHANGELOG.md, manifest.json, quick-reference.md
# 3. Rezip with PowerShell (use cygpath for Windows paths)
WIN_PATH=$(cygpath -w /tmp/skill-update/skill-name)
WIN_ZIP=$(cygpath -w /tmp/skill-update/skill-name.zip)
powershell -Command "Compress-Archive -Path '$WIN_PATH\*' -DestinationPath '$WIN_ZIP' -Force"
# 4. Deploy with .skill extension
cp /tmp/skill-update/skill-name.zip skills/packaged/skill-name.skill
```

[ground:user-correction:2026-01-08]

- Use `cygpath -w /unix/path` to convert Git Bash paths before invoking PowerShell `Compress-Archive` [ground:error-correction:2026-01-08]
- Use heredocs (`cat > file << 'EOF'`) as a reliable alternative for file writes [ground:observation:pattern:2026-01-08]

#### Context Cascade Index Maintenance
When maintaining Context Cascade component counts and discovery indexes:
| File | Update Action |
|---|---|
| context-cascade/CLAUDE.md | Update component count table |
| discovery/SKILL-INDEX.md | Add skill entries with category tables |
| scripts/skill-index/skill-index.json | Add skill routing data |
| ~/.claude/CLAUDE.md | Sync total counts |
- Count core skills: `find "/c/Users/17175/claude-code-plugins/context-cascade/skills" -name "SKILL.md" | wc -l`
- Count supplementary skills: `find "/c/Users/17175/.claude/skills" -name "*.md" | wc -l`
- Count agents: `find "/c/Users/17175/claude-code-plugins/context-cascade/agents" -name "*.md" | wc -l`
- Count commands: `find "/c/Users/17175/claude-code-plugins/context-cascade/commands" -name "*.md" | wc -l`
- Index scripts (`build-skill-index.py`, `generate-index.js`) only scan the core `skills/` directory
- Skills under `.claude/skills/` require manual addition OR a script update
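As an optional cross-check, the counts above can be gathered programmatically. The paths below are the ones used in the find commands and may differ per machine; the helper name is illustrative.

```python
# Illustrative cross-check of Context Cascade component counts.
from pathlib import Path

ROOTS = {
    "core skills": ("C:/Users/17175/claude-code-plugins/context-cascade/skills", "SKILL.md"),
    "supplementary skills": ("C:/Users/17175/.claude/skills", "*.md"),
    "agents": ("C:/Users/17175/claude-code-plugins/context-cascade/agents", "*.md"),
    "commands": ("C:/Users/17175/claude-code-plugins/context-cascade/commands", "*.md"),
}

def component_counts() -> dict[str, int]:
    counts = {}
    for name, (root, pattern) in ROOTS.items():
        base = Path(root)
        counts[name] = len(list(base.rglob(pattern))) if base.exists() else 0
    return counts

if __name__ == "__main__":
    for name, count in component_counts().items():
        print(f"{name}: {count}")
```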