Learn from mistakes. Don't repeat them.
Captures what went wrong, what was non-obvious, and what the agent should do differently. Every learning includes a prevention rule — a forward-looking instruction that changes future behavior.
Part of the unified knowledge system. Self-learning stores through
sage-memory (or files) with the self-learning tag / learning type.
During recall, learnings surface as warnings alongside regular knowledge.
| Capability | MCP | Files |
|---|---|---|
| Store learnings | ✅ sage_memory_store | ✅ .sage-memory/lrn-*.md files |
| Search learnings | ✅ BM25 + filter_tags | ⚠️ scan lrn- files by name |
| Update learnings | ✅ sage_memory_update | ✅ edit file |
| Delete learnings | ✅ sage_memory_delete | ✅ delete file |
| Browse by type | ✅ sage_memory_list | ✅ scan lrn- files |
| Link to entities | ✅ sage_memory_link | ❌ skip |
| Graph-based recall | ✅ sage_memory_graph | ❌ skip |
| Namespace isolation | ✅ filter_tags | ✅ lrn- filename prefix |
How to detect backend: At session start, call sage_memory_set_project
with the project root. If it responds, use MCP. If not, use
.sage-memory/ files.
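This fallback can be sketched in Python, assuming a hypothetical `call_mcp` callable that wraps the MCP tool invocation and raises when no server is reachable (neither `call_mcp` nor `detect_backend` is part of sage itself):

```python
from pathlib import Path

def detect_backend(call_mcp, project_root):
    """Return "mcp" if sage_memory_set_project responds, else "files".

    `call_mcp` is a hypothetical wrapper around the MCP tool call;
    it is assumed to raise if the MCP server is unavailable.
    """
    try:
        call_mcp("sage_memory_set_project", {"root": str(project_root)})
        return "mcp"
    except Exception:
        # Fall back to the file backend and make sure the directory exists.
        Path(project_root, ".sage-memory").mkdir(parents=True, exist_ok=True)
        return "files"
```

The key design point is that detection happens once, at session start; every later store or search call dispatches on the returned value.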
At task start, search for learnings relevant to the current task.
Basic recall (keyword):
sage_memory_search(
query: "<task-relevant keywords>",
filter_tags: ["self-learning"],
limit: 5
)
Always include filter_tags: ["self-learning"] — this excludes all
non-learning entries.
Targeted recall (graph-based): When you know the current task's ontology entity ID:
sage_memory_graph(
id: "<task_entity_memory_id>",
relation: "applies_to",
direction: "inbound",
depth: 1
)
Returns learnings explicitly linked to this task — more precise than keyword search.
Hot spot detection:
sage_memory_graph(
id: "<module_entity_id>",
relation: "applies_to",
direction: "inbound",
depth: 1
)
If 5+ linked learnings → flag the area as mistake-prone.
Scan .sage-memory/ for lrn- prefixed files. Read filenames and
identify those relevant to the current task. Read matching files for
their prevention rules.
For a broad search: list all lrn-*.md files and scan names.
For a focused search: look for keywords in filenames like
lrn-stripe-webhook-*.md when working on Stripe webhooks.
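The focused filename scan could look like this (a sketch; `find_learnings` is a hypothetical helper, not part of sage):

```python
from pathlib import Path

def find_learnings(project_root, keywords):
    """Return names of lrn-*.md files whose filenames mention any keyword."""
    mem = Path(project_root) / ".sage-memory"
    if not mem.is_dir():
        return []
    kws = [k.lower() for k in keywords]
    return sorted(p.name for p in mem.glob("lrn-*.md")
                  if any(k in p.name.lower() for k in kws))
```

For example, `find_learnings(root, ["stripe", "webhook"])` would surface `lrn-stripe-webhook-raw-body.md` while skipping unrelated learnings.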
When learnings are found, report the prevention rule, not the incident. Say: "Before working with Stripe webhooks, verify that body parsing middleware is skipped for the webhook route."
When nothing is found, say nothing.
| Type | Trigger |
|---|---|
| gotcha | Non-obvious behavior discovered through debugging |
| correction | User corrected the agent |
| convention | Undocumented project/team pattern discovered |
| api-drift | API/library behaves differently than expected |
| error-fix | Recurring error with a known solution |
Title: [LRN:<type>] <specific description>
Content: Four-part structure: What happened, Why, What's correct, Prevention.
With MCP:
sage_memory_store(
title: "[LRN:gotcha] Stripe webhook requires raw body before JSON parsing",
content: "What happened: Webhook signature verification failed with 400.
Why: Express body parser replaced raw body with parsed JSON.
What's correct: Use express.raw() for the webhook route.
Prevention: Before implementing any webhook handler that verifies
signatures, check whether the SDK requires the raw request body.",
tags: ["self-learning", "gotcha", "stripe", "webhooks"],
scope: "project"
)
With files:
File: .sage-memory/lrn-stripe-webhook-raw-body.md
---
tags: [self-learning, gotcha, stripe, webhooks]
type: learning
scope: project
created: 2026-03-20
---
[LRN:gotcha] Stripe webhook requires raw body before JSON parsing
What happened: Webhook signature verification failed with 400
"No signatures found matching the expected signature."
Why: Express body parser replaced raw body with parsed JSON before
the Stripe SDK could verify the signature.
What's correct: Use express.raw({type: 'application/json'})
middleware for the webhook route, before the global body parser.
Prevention: Before implementing any webhook handler that verifies
signatures (Stripe, GitHub, Twilio), check whether the SDK requires
the raw request body. If yes, ensure body parsing middleware is
skipped or deferred for that route.
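Reading a prevention rule back out of such a file could be sketched like this, assuming the four-part labels shown above (`read_prevention` is a hypothetical helper):

```python
def read_prevention(text):
    """Extract the Prevention paragraph from a learning file's body.

    Assumes the four-part structure shown above, where the Prevention
    section runs from the "Prevention:" label to the end of the file.
    """
    marker = "Prevention:"
    idx = text.find(marker)
    if idx == -1:
        return None
    # Collapse hard-wrapped lines into a single flowing sentence.
    return " ".join(text[idx + len(marker):].split())
```

This is what the recall step reports to the user: the forward-looking rule, not the incident narrative.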
After storing a learning, link it to the relevant entity:
sage_memory_link(
source_id: "<learning_memory_id>",
target_id: "<task_or_module_entity_id>",
relation: "applies_to"
)
With files: Skip linking. Mention the related entity in the content if the connection is important: "Related entity: task_a1b2 (Fix payment timeout)."
When you follow a stored self-learning entry and it leads to incorrect behavior (wrong library, outdated pattern, contradicted convention):
Store a NEW learning (type: correction) describing what the
original said, why it's now wrong, and what the correct approach is.
Invalidate the original:
sage_memory_update(id: "<original_id>", status: "invalidated")
Link the correction to the original:
sage_memory_link(
source_id: "<correction_id>",
target_id: "<original_id>",
relation: "corrects"
)
The original learning will never appear in search again. The correction replaces it as active knowledge. The graph edge preserves the audit trail.
With files: Rename the original file to lrn-INVALID-<name>.md and
add status: invalidated to its frontmatter. Create the correction as
a new file.
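The file-backend invalidation can be sketched as follows, assuming the frontmatter always opens with a `---` line as in the example above (`invalidate_learning` is a hypothetical helper):

```python
from pathlib import Path

def invalidate_learning(path):
    """Rename lrn-<name>.md to lrn-INVALID-<name>.md and mark its frontmatter."""
    p = Path(path)
    text = p.read_text()
    # Insert status: invalidated right after the opening --- delimiter.
    text = text.replace("---\n", "---\nstatus: invalidated\n", 1)
    target = p.with_name("lrn-INVALID-" + p.name[len("lrn-"):])
    target.write_text(text)
    p.unlink()
    return target
```

The rename keeps the invalidated file out of any `lrn-*` keyword scan while preserving it on disk as an audit trail, mirroring what the `corrects` graph edge does on the MCP side.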
Before creating a new learning, check for existing similar learnings:
Search with the new learning's core content:
sage_memory_search(
query: "<what_happened + prevention_rule>",
filter_tags: ["self-learning"],
limit: 3
)
If the top result describes the same root cause and same prevention:
sage_memory_update(id: "<existing_id>", content: "<merged content>")
If no strong match → store as new.
Why: Three entries saying "check middleware order" waste search slots. One entry that gets richer over time is more useful.
With files: Scan lrn-*.md filenames for similar topics. If a
match exists, edit that file instead of creating a new one.
Ask: "Would this change how I approach a future task?"
Budget: 2-5 learnings per significant task.
Triggered by "sage review" or "review learnings."
sage_memory_list(tags: ["self-learning"]) → all learnings
sage_memory_list(tags: ["self-learning", "gotcha"]) etc.
sage_memory_graph on key entities → count inbound applies_to edges → report most mistake-prone areas
With files: list all lrn-*.md files; group lrn-*.md files by domain keyword in filename.
Project → Global: Learning applies beyond this codebase. Store a context-independent version at global scope.
With MCP: sage_memory_store(..., scope: "global")
With files: Copy to ~/.sage-memory/ (global directory), remove
project-specific details.
Global → Team: Export to a shared file in the repo. Read:
references/team-sharing.md.
Read: references/promotion-rules.md for criteria.
Prevention over documentation. Every learning answers: "What should I check before this happens again?"
Specificity retrieves. [LRN:gotcha] Stripe webhook requires raw body
retrieves. [LRN:gotcha] API issue does not.
Freshness matters. Update or delete when code changes make a learning obsolete.
Learnings are not memories. "Billing uses saga pattern" is a memory. "Agent assumed REST, broke the compensation chain" is a learning.
references/capture-patterns.md — Triggers, examples, prevention rules
references/storage-conventions.md — Format conventions
references/promotion-rules.md — Scope escalation criteria
references/team-sharing.md — Export formats for teams
references/review-workflow.md — Curation process
references/examples.md — End-to-end scenarios
references/ontology-integration.md — Graph integration