Processes external resources like articles, blogs, and papers into stored knowledge via quality evaluation, curation, tidying, and routing to storage or codebase application. Use for capturing session knowledge into memory structures.
npx claudepluginhub athola/claude-night-market --plugin memory-palace

This skill uses the workspace's default tool permissions.
Systematically process external resources into actionable knowledge. When a user links an article, blog post, or paper, this skill guides evaluation, storage decisions, and application routing.
A knowledge governance framework that answers three questions for every external resource:
When a user links an external resource, it is a signal of importance.
The act of sharing indicates the resource passed the user's own filter. Our job is to:
When a user shares a link:
1. FETCH → Detect format, retrieve and convert content
2. EVALUATE → Apply importance criteria
3. DECIDE → Storage location and application type
4. STORE → Create structured knowledge entry
5. VALIDATE → Scribe verification (slop scan + doc verify)
6. CONNECT → Link to existing palace structures
7. PROMOTE → Offer Discussion promotion (score 80+)
8. APPLY → Route to codebase or infrastructure updates
9. PRUNE → Identify displaced/outdated knowledge
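The nine stages above can be sketched as an ordered enum. This is purely illustrative; the skill drives these steps through prompts, and the `IntakeStage` name is an assumption, not part of the skill's code:

```python
from enum import Enum

class IntakeStage(Enum):
    """Ordered stages of the knowledge-intake pipeline (illustrative)."""
    FETCH = 1      # detect format, retrieve and convert content
    EVALUATE = 2   # apply importance criteria
    DECIDE = 3     # storage location and application type
    STORE = 4      # create structured knowledge entry
    VALIDATE = 5   # scribe verification (slop scan + doc verify)
    CONNECT = 6    # link to existing palace structures
    PROMOTE = 7    # offer Discussion promotion (score 80+)
    APPLY = 8      # route to codebase or infrastructure updates
    PRUNE = 9      # identify displaced/outdated knowledge

# Enums iterate in definition order, so this walks the pipeline
for stage in IntakeStage:
    print(f"{stage.value}. {stage.name}")
```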
Before retrieving content, detect the source format from the URL or file path to choose the right retrieval method.
Web articles and blog posts (default path): Use WebFetch to retrieve HTML content directly. No conversion needed.
Document URLs (PDF, DOCX, PPTX, XLSX):
Apply the leyline:document-conversion protocol.
This tries the markitdown MCP tool first for high-quality
markdown, then falls back to native Claude Code tools
(Read for PDFs, etc.), then informs the user if the
format is unsupported without markitdown.
Local files (user shares a file path):
Construct a file:// URI from the absolute path and
apply the leyline:document-conversion protocol.
Format detection heuristics:
| URL Pattern | Format | Retrieval |
|---|---|---|
| *.pdf, arxiv.org/pdf/* | PDF | document-conversion |
| *.docx, *.doc | Word | document-conversion |
| *.pptx, *.ppt | PowerPoint | document-conversion |
| *.xlsx, *.xls | Excel | document-conversion |
| *.epub | E-book | document-conversion |
| drive.google.com/* | Various | document-conversion |
| Everything else | HTML/web | WebFetch (existing) |
After retrieval (regardless of method), wrap the content
in external content boundary markers per
leyline:content-sanitization before proceeding to
Step 2 (EVALUATE).
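The heuristics table above can be sketched as a small dispatcher. The function and constant names here are illustrative assumptions, not part of the skill:

```python
from urllib.parse import urlparse

# Suffixes and hosts mirroring the format-detection table
DOCUMENT_SUFFIXES = (".pdf", ".docx", ".doc", ".pptx", ".ppt",
                     ".xlsx", ".xls", ".epub")
DOCUMENT_HOSTS = ("drive.google.com",)

def retrieval_route(url: str) -> str:
    """Return 'document-conversion' or 'webfetch' for a source URL."""
    parsed = urlparse(url)
    path = parsed.path.lower()
    if path.endswith(DOCUMENT_SUFFIXES) or "/pdf/" in path:
        return "document-conversion"   # e.g. arxiv.org/pdf/*
    if parsed.netloc in DOCUMENT_HOSTS:
        return "document-conversion"   # Google Drive: format varies
    return "webfetch"                  # default: HTML article/blog post

print(retrieval_route("https://arxiv.org/pdf/2301.00001"))  # document-conversion
```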
All knowledge corpus entries MUST pass scribe validation before finalizing.
Run Skill(scribe:slop-detector) on the new entry:
Use Agent(scribe:doc-verifier) to validate:
# Quick validation for knowledge corpus entry
/slop-scan docs/knowledge-corpus/[entry-name].md
# Doc verification is now agent-only:
Agent(scribe:doc-verifier) "Verify docs/knowledge-corpus/[entry-name].md"
DO NOT finalize entries with slop score > 2.5 - rewrite with concrete specifics.
Verification: Run the command with --help flag to verify availability.
When the evaluation score is 80-100 (evergreen), you MUST execute the Discussion promotion workflow. If the score is below 80, skip this step entirely.
Execute these steps in order:
1. Read modules/discussion-promotion.md for the full GraphQL workflow.
2. Run the gh api graphql commands from the module to create or update a Discussion in the "Knowledge" category.
3. Record the resulting discussion_url on the entry. If the entry already has a discussion_url field, update the existing Discussion instead of creating a new one.
4. If gh is unavailable or promotion fails, warn the user and continue to Step 8 (APPLY).

Publishing is the default for qualifying entries. It never blocks the intake workflow.
| Criterion | Weight | Questions |
|---|---|---|
| Novelty | 25% | Does this introduce new patterns or concepts? |
| Applicability | 30% | Can we apply this to current work? |
| Durability | 20% | Will this remain relevant in 6+ months? |
| Connectivity | 15% | Does it connect to multiple existing concepts? |
| Authority | 10% | Is the source credible and well-reasoned? |
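The weighted rubric above reduces to a simple dot product. A minimal sketch, assuming each criterion is scored 0-100 (the function name and example scores are illustrative):

```python
# Rubric weights from the evaluation table
WEIGHTS = {
    "novelty": 0.25,
    "applicability": 0.30,
    "durability": 0.20,
    "connectivity": 0.15,
    "authority": 0.10,
}

def importance_score(scores: dict) -> float:
    """Weighted 0-100 importance score for a resource."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

example = {"novelty": 75, "applicability": 90, "durability": 85,
           "connectivity": 70, "authority": 80}
print(round(importance_score(example), 2))  # 81.25 -> evergreen (80+)
```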
Apply when knowledge directly improves current project:
Action: Update code, add comments, create ADR
Apply when knowledge improves our plugin ecosystem:
Action: Update skills, create modules, enhance agents
Is the knowledge...
├── About HOW we build things? → Meta-infrastructure
│ ├── Skill patterns → Update abstract/memory-palace skills
│ ├── Learning methods → Add to knowledge-corpus
│ └── Tool techniques → Create new skill module
│
└── About WHAT we're building? → Local codebase
├── Domain knowledge → Store in project docs
├── Implementation patterns → Update code/architecture
└── Bug/issue solutions → Apply fix, document
| Knowledge Type | Location | Format |
|---|---|---|
| Meta-learning patterns | docs/knowledge-corpus/ | Full memory palace entry |
| Skill design insights | skills/*/modules/ | Technique module |
| Tool/library knowledge | docs/references/ | Quick reference |
| Temporary insights | Digital garden seedling | Lightweight note |
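The routing table above is effectively a lookup from knowledge type to a (location, format) pair. A minimal sketch; the key names are illustrative assumptions:

```python
# Storage routing mirroring the table above
STORAGE_ROUTES = {
    "meta_learning_pattern": ("docs/knowledge-corpus/", "full memory palace entry"),
    "skill_design_insight": ("skills/*/modules/", "technique module"),
    "tool_library_knowledge": ("docs/references/", "quick reference"),
    "temporary_insight": ("digital garden seedling", "lightweight note"),
}

def route_storage(knowledge_type: str) -> tuple:
    """Return (location, format) for a classified knowledge type."""
    return STORAGE_ROUTES[knowledge_type]

location, fmt = route_storage("tool_library_knowledge")
print(location)  # docs/references/
```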
"A cluttered palace is a cluttered mind."
New knowledge often displaces old, but time is not the criterion. Relevance and aspirational alignment are.
The human in the loop defines what stays. Before major tidying:
For each piece of knowledge, both must be yes:
| Finding | Action |
|---|---|
| Supersedes | Archive old with gratitude, link as context |
| Contradicts | Evaluate both, keep what sparks joy |
| No longer aligned | Release with gratitude |
| Complements | Create bidirectional links |
"I might need this someday" is fear, not joy. Release it.
"If it can't teach something the existing corpus can't already teach → skip it."
Before storing ANY knowledge, run the marginal value filter to prevent corpus pollution.
1. Redundancy Check
2. Delta Analysis (for partial overlap only)
3. Integration Decision
from memory_palace.corpus import MarginalValueFilter, IntegrationDecision
# Initialize filter with corpus and index directories
filter = MarginalValueFilter(
corpus_dir="docs/knowledge-corpus",
index_dir="docs/knowledge-corpus/indexes"
)
# Evaluate new content
redundancy, delta, integration = filter.evaluate_content(
content=article_text,
title="Structured Concurrency in Python",
tags=["async", "concurrency", "python"]
)
# Get human-readable explanation
explanation = filter.explain_decision(redundancy, delta, integration)
print(explanation)
# Act on decision
if integration.decision == IntegrationDecision.SKIP:
print(f"Skipping: {integration.rationale}")
elif integration.decision == IntegrationDecision.STANDALONE:
# Store as new entry
store_knowledge(content, title)
elif integration.decision == IntegrationDecision.MERGE:
# Enhance existing entry
enhance_entry(integration.target_entries[0], content)
elif integration.decision == IntegrationDecision.REPLACE:
# Replace outdated entry
replace_entry(integration.target_entries[0], content)
=== Marginal Value Assessment ===
Redundancy: partial
Overlap: 65%
Matches: async-patterns, python-concurrency
- Partial overlap (65%) with 2 entries
Delta Type: novel_insight
Value Score: 75%
Teaching Delta: Introduces 8 new concepts
Novel aspects:
+ New concepts: structured, taskgroup, context-manager
+ New topics: Error Propagation, Resource Cleanup
Decision: STANDALONE
Confidence: 80%
Rationale: Novel insights justify standalone: Introduces 8 new concepts
The marginal value filter respects autonomy levels (see plan Phase 4):
Current implementation: Level 0 (all human-in-the-loop).
The knowledge corpus uses reinforcement learning signals to dynamically score entry quality based on actual usage patterns.
| Signal | Weight | Description |
|---|---|---|
| ACCESS | +0.1 | Entry was accessed/read |
| CITATION | +0.3 | Entry was cited in another context |
| POSITIVE_FEEDBACK | +0.5 | User marked as helpful |
| NEGATIVE_FEEDBACK | -0.3 | User marked as unhelpful |
| CORRECTION | +0.2 | Entry was corrected/updated |
| STALE_FLAG | -0.4 | Entry marked as potentially outdated |
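One plausible reading of the weights above is an additive quality update clamped to [0, 1]. This is a sketch under that assumption; the actual scoring formula lives in the corpus implementation:

```python
# Signal weights from the table above
SIGNAL_WEIGHTS = {
    "ACCESS": 0.1,
    "CITATION": 0.3,
    "POSITIVE_FEEDBACK": 0.5,
    "NEGATIVE_FEEDBACK": -0.3,
    "CORRECTION": 0.2,
    "STALE_FLAG": -0.4,
}

def apply_signal(quality: float, signal: str) -> float:
    """Nudge an entry's quality score by a usage signal, clamped to [0, 1]."""
    return min(1.0, max(0.0, quality + SIGNAL_WEIGHTS[signal]))

q = 0.5
q = apply_signal(q, "ACCESS")             # 0.6
q = apply_signal(q, "POSITIVE_FEEDBACK")  # clamped to 1.0
print(q)
```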
Knowledge entries decay over time unless validated:
| Maturity | Half-Life | Decay Curve |
|---|---|---|
| Seedling | 14 days | Exponential |
| Growing | 30 days | Exponential |
| Evergreen | 90 days | Logarithmic |
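The half-lives above imply a standard decay formula, score(t) = score(0) * 0.5^(t / half_life). A minimal sketch; note it simplifies the evergreen tier's logarithmic curve to exponential for illustration:

```python
# Half-lives (days) from the decay table
HALF_LIFE_DAYS = {"seedling": 14, "growing": 30, "evergreen": 90}

def decayed_score(initial: float, maturity: str, age_days: float) -> float:
    """Exponential decay by maturity half-life (evergreen's logarithmic
    curve is simplified to exponential in this sketch)."""
    half_life = HALF_LIFE_DAYS[maturity]
    return initial * 0.5 ** (age_days / half_life)

print(decayed_score(1.0, "seedling", 14))  # 0.5: one half-life elapsed
```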
Entries are classified by decay status:
Hybrid lineage tracking based on source importance:
Full Lineage (for important sources):
Simple Lineage (for standard sources):
Full lineage is used for:
The KnowledgeOrchestrator coordinates all quality systems:
from memory_palace.corpus import KnowledgeOrchestrator, UsageSignal
# Initialize orchestrator
orchestrator = KnowledgeOrchestrator(
corpus_dir="docs/knowledge-corpus",
index_dir="docs/knowledge-corpus/indexes"
)
# Record usage events
orchestrator.record_usage("entry-1", UsageSignal.ACCESS)
orchestrator.record_usage("entry-1", UsageSignal.POSITIVE_FEEDBACK)
# Assess entry quality
entry = {"id": "entry-1", "maturity": "growing"}
assessment = orchestrator.assess_entry(entry)
print(f"Quality: {assessment.overall_score:.0%}")
print(f"Status: {assessment.status}")
print(f"Recommendations: {assessment.recommendations}")
# Get maintenance queue
entries = [...] # Your entry list
queue = orchestrator.get_maintenance_queue(entries)
for item in queue:
print(f"{item.entry_id}: {item.status} - {item.recommendations}")
# Ingest new content with lineage
from memory_palace.corpus import SourceReference, SourceType
source = SourceReference(
source_id="src-1",
source_type=SourceType.DOCUMENTATION,
url="https://docs.example.com/api",
title="API Documentation"
)
entry_id, decision = orchestrator.ingest_with_lineage(
content="# API Reference\n...",
title="API Documentation",
source=source
)
The marginal value filter emits RL signals on integration decisions:
from memory_palace.corpus import MarginalValueFilter
filter = MarginalValueFilter(corpus_dir, index_dir)
# Evaluate with RL signal emission
redundancy, delta, integration, rl_signal = filter.evaluate_with_rl(
content=article_text,
title="New Article",
tags=["python", "async"]
)
# RL signal contains:
# - signal_type: UsageSignal to emit
# - weight: Signal weight for scoring
# - action: What happened (new_entry_created, entry_enhanced, etc.)
# - decision: Integration decision made
# - confidence: Decision confidence
print(f"RL Signal: {rl_signal['action']} (weight: {rl_signal['weight']})")
User shares: "Check out this article on structured concurrency"
intake:
source: "https://example.com/structured-concurrency"
# PHASE 3: Marginal Value Filter
marginal_value:
redundancy:
level: partial_overlap
overlap_score: 0.65
matching_entries: [async-patterns, python-concurrency]
delta:
type: novel_insight
value_score: 0.75
novel_aspects: [structured, taskgroup, context-manager]
teaching_delta: "Introduces structured concurrency pattern"
integration:
decision: standalone
confidence: 0.80
rationale: "Novel insights justify standalone entry"
# Continue with evaluation if filter passes
evaluation:
novelty: 75 # New pattern for error handling
applicability: 90 # Directly relevant to async code
durability: 85 # Core concept, won't age quickly
connectivity: 70 # Links to error handling, async patterns
authority: 80 # Well-known author, cited sources
total: 82 # Evergreen, store and apply
routing:
type: both
local_application:
- Refactor async error handling in current project
- Add structured concurrency pattern to codebase
meta_application:
- Create module in relevant skill
- Add to knowledge-corpus as reference
storage:
location: docs/knowledge-corpus/structured-concurrency.md
format: memory_palace_entry
maturity: growing
pruning:
displaces:
- Old async error patterns (mark deprecated)
complements:
- Existing error handling module
- Async patterns documentation
Research sessions and external content are automatically queued for review in docs/knowledge-corpus/queue/.
# List pending queue entries
ls -1t docs/knowledge-corpus/queue/*.yaml
# Review specific entry
cat docs/knowledge-corpus/queue/2025-12-31_topic.yaml
# Process approved entry
# 1. Create memory palace entry in docs/knowledge-corpus/
# 2. Update queue entry status to 'processed'
# 3. Archive or delete queue entry
The research-queue-integration hook automatically queues:
Queue entry format: See docs/knowledge-corpus/queue/README.md
pending_review → [Review] → approved/rejected
approved → [Create Entry] → processed
processed → [Archive] → queue/archive/
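The lifecycle above can be sketched as a tiny state machine that rejects illegal transitions. The function and table names are illustrative, not part of the queue implementation:

```python
# Queue entry lifecycle from the states above (illustrative)
TRANSITIONS = {
    "pending_review": {"approved", "rejected"},
    "approved": {"processed"},
    "processed": {"archived"},
}

def advance(status: str, new_status: str) -> str:
    """Move a queue entry to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

status = advance("pending_review", "approved")
status = advance(status, "processed")
print(status)  # processed
```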
uv run python scripts/intake_cli.py --candidate path/to/intake_candidate.json --auto-accept

The CLI writes corpus entries (docs/knowledge-corpus/*.md) and developer drafts (docs/developer-drafts/), and appends audit rows to docs/curation-log.md. Use --output-root in tests or sandboxes to avoid mutating the main corpus. Use the --process-queue flag to review and process queued entries interactively.

See also:
- modules/evaluation-rubric.md
- modules/storage-patterns.md
- modules/konmari-tidying.md
- modules/pruning-workflows.md
- modules/discussion-promotion.md for the full workflow

Memory-palace hooks automatically detect content that may need knowledge intake processing:
| Hook | Event | When Triggered |
|---|---|---|
| url_detector | UserPromptSubmit | User message contains URLs |
| web_content_processor | PostToolUse (WebFetch/WebSearch) | After fetching web content |
| local_doc_processor | PostToolUse (Read) | Reading files in knowledge paths |
| research_queue_integration | SessionEnd | Research sessions with 3+ WebSearch calls |
When hooks detect potential knowledge content, they add context messages:
Memory Palace: New web content fetched from {url}.
Consider running knowledge-intake to evaluate and store if valuable.
Memory Palace: Reading local knowledge doc '{path}'.
This path is configured for knowledge tracking.
Consider running knowledge-intake if this contains valuable reference material.
Hooks check the memory-palace-index.yaml to avoid redundant processing:
Before signaling intake, hooks validate content:
The deduplication index stores fields aligned with this skill's evaluation:
entries:
"https://example.com/article":
content_hash: "xxh:abc123..."
stored_at: "docs/knowledge-corpus/article.md"
importance_score: 82 # From evaluation framework
maturity: "growing" # seedling, growing, evergreen
routing_type: "both" # local, meta, both
last_updated: "2025-12-06T..."
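The deduplication check reduces to a hash comparison keyed by URL. A minimal sketch of that lookup; sha256 stands in for the index's xxh hash, and the helper names are assumptions:

```python
import hashlib

def content_key(text: str) -> str:
    """Content hash key; sha256 stands in for the index's xxh hash here."""
    return "sha256:" + hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

def already_stored(index: dict, url: str, text: str) -> bool:
    """True when the URL is indexed with an unchanged content hash."""
    entry = index.get(url)
    return entry is not None and entry["content_hash"] == content_key(text)

index = {
    "https://example.com/article": {
        "content_hash": content_key("original body"),
        "stored_at": "docs/knowledge-corpus/article.md",
    }
}
print(already_stored(index, "https://example.com/article", "original body"))  # True
print(already_stored(index, "https://example.com/article", "revised body"))   # False
```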
- memory-palace-architect - Structures stored knowledge spatially
- digital-garden-cultivator - Manages knowledge lifecycle
- knowledge-locator - Finds and retrieves stored knowledge
- skills-eval (abstract) - Evaluates meta-infrastructure updates

Command not found: Ensure all dependencies are installed and in PATH.
Permission errors: Check file permissions and run with appropriate privileges.

Unexpected behavior: Enable verbose logging with the --verbose flag.