Manage memory citations, verify code references, and track confidence scores. Use when adding citations to memories, checking memory health, or verifying code references are still valid.
/plugin marketplace add rjmurillo/ai-agents
/plugin install project-toolkit@ai-agents

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Manage citations, verify code references, and track confidence scores for Serena memories. Ensures memories stay accurate by linking them to specific code locations and detecting when those locations change.
- add citation to memory - Link memory to specific code location
- verify memory citations - Check if code references are still valid
- check memory health - Generate staleness report across all memories
- update memory confidence - Recalculate trust score based on verification

| Input | Output | Duration |
|---|---|---|
| Memory ID + code reference | Citation added with validation | < 5 seconds |
| Memory directory | Health report with stale memories | < 30 seconds |
| Verification results | Updated confidence scores | < 10 seconds |
Need memory enhancement?
│
├─ Add citation to memory → add-citation command
├─ Verify citations → verify or verify-all command
├─ Check memory health → health command
├─ Traverse memory graph → graph command
└─ Update confidence → update-confidence command
Locate memory file by ID or path: check .serena/memories/ for <memory-id>.md

Verification: Memory file exists and is readable
Use CLI commands with structured output:
python -m memory_enhancement add-citation <memory-id> --file <path> --line <num> --snippet <text>
Parameters:
- memory-id - Memory identifier or file path
- --file - Relative file path from repository root (required)
- --line - Line number (1-indexed, optional for file-level citations)
- --snippet - Code snippet for fuzzy matching (optional)
- --dry-run - Preview changes without writing (optional)

Exit Codes (ADR-035):
# Single memory
python -m memory_enhancement verify <memory-id> [--json]
# All memories
python -m memory_enhancement verify-all [--dir .serena/memories] [--json]
Output Indicators:
Verification: Citations validated against current codebase state
Recalculate based on verification results:
python -m memory_enhancement update-confidence <memory-id>
Confidence Calculation:
confidence = valid_citations / total_citations
Interpretation:
| Score Range | Meaning | Action |
|---|---|---|
| 0.9 - 1.0 | High confidence | Trust memory, use in decisions |
| 0.7 - 0.9 | Medium confidence | Review stale citations |
| 0.5 - 0.7 | Low confidence | Update memory or mark obsolete |
| 0.0 - 0.5 | Very low confidence | Memory likely outdated |
| No citations | Default (0.5) | Add citations to improve confidence |
Verification: Confidence score updated in YAML frontmatter
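The scoring rule is simple enough to sketch in a few lines. This hypothetical helper mirrors the formula and bands above, including the 0.5 default for memories with no citations; it is an illustration, not the tool's actual implementation:

```python
def confidence_score(valid_citations: int, total_citations: int) -> float:
    """Mirror the documented rule: valid / total, defaulting to 0.5
    when a memory has no citations at all."""
    if total_citations == 0:
        return 0.5  # no evidence either way
    return valid_citations / total_citations


def interpret(score: float) -> str:
    """Map a score onto the action bands from the table above."""
    if score >= 0.9:
        return "high: trust memory"
    if score >= 0.7:
        return "medium: review stale citations"
    if score >= 0.5:
        return "low: update memory or mark obsolete"
    return "very low: memory likely outdated"
```

For example, a memory with 2 of 3 valid citations scores about 0.67 and lands in the "update or mark obsolete" band.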
Display summary with actionable recommendations:
python -m memory_enhancement list-citations <memory-id> [--json]
Human-readable output:
Citations for memory-001:
Total: 3
✅ src/api.py:42
Snippet: handleError
❌ src/client.ts:100
Reason: Line 100 exceeds file length (95 lines)
✅ scripts/test.py
JSON output for programmatic usage:
{
"citations": [
{
"path": "src/api.py",
"line": 42,
"snippet": "handleError",
"valid": true,
"mismatch_reason": null,
"verified": "2026-01-24T14:30:00"
}
]
}
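The JSON form is convenient for scripting. A small sketch that loads it and lists failing citations, assuming the schema shown above (a `citations` array with `path`, optional `line`, and `valid` fields):

```python
import json


def stale_citations(report_json: str) -> list[str]:
    """Return 'path:line' locations for citations that failed verification."""
    data = json.loads(report_json)
    stale = []
    for c in data["citations"]:
        if not c["valid"]:
            loc = c["path"] if c.get("line") is None else f'{c["path"]}:{c["line"]}'
            stale.append(loc)
    return stale
```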
python -m memory_enhancement health [--format markdown|json] [--include-graph]
Generates comprehensive report with:
Verification: Report generated successfully
| Operation | CLI Command | Key Parameters |
|---|---|---|
| Add citation | python -m memory_enhancement add-citation | <memory-id>, --file, --line, --snippet |
| Verify memory | python -m memory_enhancement verify | <memory-id>, --json |
| Verify all | python -m memory_enhancement verify-all | --dir, --json |
| Health report | python -m memory_enhancement health | --json, --markdown, --summary |
| Update confidence | python -m memory_enhancement update-confidence | <memory-id> |
| List citations | python -m memory_enhancement list-citations | <memory-id>, --json |
| Graph traversal | python -m memory_enhancement graph | <root-id>, --strategy, --max-depth |
| Avoid | Why | Instead |
|---|---|---|
| Adding citations without verifying file exists | Adds invalid citations immediately | Let CLI validate on add |
| Skipping confidence updates after verification | Confidence becomes stale | Run update-confidence after big code changes |
| Using absolute paths | Breaks on different machines | Use repo-relative paths |
| Adding duplicate citations | Clutters frontmatter | CLI automatically updates existing citations |
| Forgetting to verify after refactoring | Citations go stale silently | Run verify-all regularly or in CI |
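The "run verify-all regularly or in CI" advice can be wired into a pre-commit hook or CI gate. This sketch assumes `verify-all --json` emits a list of results each carrying a `valid` flag; the wiring in the comment is illustrative and requires the package to be installed:

```python
import json
import subprocess
import sys


def exit_code_for(results: list[dict]) -> int:
    """Return 0 when every citation verified, 1 otherwise (CI-friendly)."""
    return 0 if all(r.get("valid") for r in results) else 1


# Illustrative wiring (assumes the memory_enhancement package is installed):
#   out = subprocess.run(
#       [sys.executable, "-m", "memory_enhancement", "verify-all", "--json"],
#       capture_output=True, text=True, check=False,
#   )
#   sys.exit(exit_code_for(json.loads(out.stdout)))
```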
Run batch health checks with exemption support:
# Full report (human-readable)
python -m memory_enhancement health [--dir .serena/memories] [--repo-root .]
# JSON output (for CI parsing)
python -m memory_enhancement health --json
# Markdown output (for PR comments)
python -m memory_enhancement health --markdown
# Summary only
python -m memory_enhancement health --summary
Status Indicators:
- exempt: true in frontmatter (skips verification)

Exemption Mechanism:
Add exempt: true to a memory's YAML frontmatter to exclude it from staleness checks.
Use this for memories that reference external resources or intentionally static content.
---
id: historical-context
subject: Project History
exempt: true
---
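Because the flag lives in plain YAML frontmatter, a health checker can skip exempt memories with a lightweight parse. This sketch deliberately avoids a YAML dependency and only looks for an `exempt: true` key between the `---` fences; it is a hypothetical helper, not the tool's implementation:

```python
def is_exempt(memory_text: str) -> bool:
    """Check YAML frontmatter (between leading '---' fences) for 'exempt: true'."""
    lines = memory_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False  # no frontmatter block at all
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter reached without finding the flag
        key, _, value = line.partition(":")
        if key.strip() == "exempt" and value.strip().lower() == "true":
            return True
    return False
```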
Exit Codes (ADR-035):
The .github/workflows/memory-health.yml workflow runs health checks on all PRs:
- Paths: .serena/memories/** and memory enhancement code
- Uses a <!-- MEMORY-HEALTH --> marker for idempotent comment updates

The .github/workflows/citation-verify.yml workflow verifies individual citations:
The actual deployed workflow replaces the example below.
name: Memory Citation Validation
on:
pull_request:
paths:
- '.serena/memories/**'
- 'src/**'
- 'scripts/**'
jobs:
verify:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@40c6b50cc6aa807e2d020b243100c016221d604c # v5.3.0
with:
python-version: '3.12'
- run: pip install -e .
- run: python -m memory_enhancement verify-all --json > results.json
continue-on-error: true
- run: cat results.json
- name: Comment on PR
if: failure()
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
script: |
const results = require('./results.json');
const stale = results.filter(r => !r.valid);
if (stale.length > 0) {
await github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `⚠️ ${stale.length} stale memory citation(s) detected. Run \`python -m memory_enhancement verify-all\` locally for details.`
});
}
Initially set continue-on-error: true (warning only). After adoption, remove it to make the check blocking.
After using this skill: