Process external resources (articles, blog posts, papers) into actionable knowledge with systematic evaluation, storage, and application decisions.

Triggers: knowledge intake, article evaluation, paper review, external resource, knowledge curation, content assessment, importance scoring, storage decision

Use when: the user shares links to external resources, evaluating articles for storage, or routing knowledge to appropriate palaces or gardens. Use this skill when processing any external knowledge source.

DO NOT use when: searching existing knowledge (use knowledge-locator) or designing new palace structures (use memory-palace-architect).
```
/plugin marketplace add athola/claude-night-market
/plugin install memory-palace@claude-night-market
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- modules/evaluation-rubric.md
- modules/konmari-tidying.md
- modules/pruning-workflows.md
- modules/storage-patterns.md
- prompts/marginal_value_dual.md
- scripts/__init__.py
- scripts/intake_cli.py

Systematically process external resources into actionable knowledge. When a user links an article, blog post, or paper, this skill guides evaluation, storage decisions, and application routing.
A knowledge governance framework that answers three questions for every external resource:
When a user links an external resource, it is a signal of importance.
The act of sharing indicates the resource passed the user's own filter. Our job is to evaluate it, store what is valuable, and route it to where it can be applied.
When a user shares a link:
1. FETCH → Retrieve and parse the content
2. EVALUATE → Apply importance criteria
3. DECIDE → Storage location and application type
4. STORE → Create structured knowledge entry
5. CONNECT → Link to existing palace structures
6. APPLY → Route to codebase or infrastructure updates
7. PRUNE → Identify displaced/outdated knowledge
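The seven phases can be sketched as one orchestration loop. Everything below is illustrative scaffolding, not part of the memory_palace API; each phase would dispatch to the evaluation, storage, and pruning machinery described in the rest of this skill.

```python
# Hypothetical sketch of the seven-phase intake pipeline. The phase
# names mirror the list above; the record dict is a stand-in for the
# real per-phase results.

PHASES = ["fetch", "evaluate", "decide", "store", "connect", "apply", "prune"]

def run_intake(url: str) -> dict:
    """Run each phase in order and record what completed."""
    record = {"url": url, "completed": []}
    for phase in PHASES:
        # A real implementation would call the phase's handler here
        # (content fetching, importance scoring, storage routing, ...).
        record["completed"].append(phase)
    return record

result = run_intake("https://example.com/structured-concurrency")
```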
| Criterion | Weight | Questions |
|---|---|---|
| Novelty | 25% | Does this introduce new patterns or concepts? |
| Applicability | 30% | Can we apply this to current work? |
| Durability | 20% | Will this remain relevant in 6+ months? |
| Connectivity | 15% | Does it connect to multiple existing concepts? |
| Authority | 10% | Is the source credible and well-reasoned? |
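The total importance score is the weighted sum of the five criterion scores (0-100 each). A minimal sketch, with weights taken from the table and an illustrative scoring call:

```python
# Weights from the evaluation criteria table above (they sum to 1.0).
WEIGHTS = {
    "novelty": 0.25,
    "applicability": 0.30,
    "durability": 0.20,
    "connectivity": 0.15,
    "authority": 0.10,
}

def importance_score(scores: dict) -> float:
    """Weighted sum of 0-100 criterion scores, yielding a 0-100 total."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical article: strong applicability, middling authority.
total = importance_score({
    "novelty": 80, "applicability": 90, "durability": 70,
    "connectivity": 60, "authority": 50,
})
```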
Apply when the knowledge directly improves the current project:
Action: Update code, add comments, create ADR
Apply when knowledge improves our plugin ecosystem:
Action: Update skills, create modules, enhance agents
```
Is the knowledge...
├── About HOW we build things? → Meta-infrastructure
│   ├── Skill patterns → Update abstract/memory-palace skills
│   ├── Learning methods → Add to knowledge-corpus
│   └── Tool techniques → Create new skill module
│
└── About WHAT we're building? → Local codebase
    ├── Domain knowledge → Store in project docs
    ├── Implementation patterns → Update code/architecture
    └── Bug/issue solutions → Apply fix, document
```
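The tree above reduces to a lookup from knowledge kind to branch and action. A sketch (the kind labels are illustrative keys, not a documented taxonomy):

```python
# Illustrative router mirroring the decision tree above.
META_ROUTES = {
    "skill_pattern": "update abstract/memory-palace skills",
    "learning_method": "add to knowledge-corpus",
    "tool_technique": "create new skill module",
}
LOCAL_ROUTES = {
    "domain_knowledge": "store in project docs",
    "implementation_pattern": "update code/architecture",
    "bug_solution": "apply fix, document",
}

def route(kind: str) -> tuple:
    """Return (branch, action) for a knowledge kind."""
    if kind in META_ROUTES:
        return ("meta-infrastructure", META_ROUTES[kind])
    if kind in LOCAL_ROUTES:
        return ("local-codebase", LOCAL_ROUTES[kind])
    raise ValueError(f"unknown knowledge kind: {kind}")
```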
| Knowledge Type | Location | Format |
|---|---|---|
| Meta-learning patterns | docs/knowledge-corpus/ | Full memory palace entry |
| Skill design insights | skills/*/modules/ | Technique module |
| Tool/library knowledge | docs/references/ | Quick reference |
| Temporary insights | Digital garden seedling | Lightweight note |
"A cluttered palace is a cluttered mind."
New knowledge often displaces old—but time is not the criterion. Relevance and aspirational alignment are.
The human in the loop defines what stays. Before major tidying:
For each piece of knowledge, both answers must be yes: does it spark joy, and does it align with current aspirations?
| Finding | Action |
|---|---|
| Supersedes | Archive old with gratitude, link as context |
| Contradicts | Evaluate both, keep what sparks joy |
| No longer aligned | Release with gratitude |
| Complements | Create bidirectional links |
"I might need this someday" is fear, not joy. Release it.
"If it can't teach something the existing corpus can't already teach → skip it."
Before storing ANY knowledge, run the marginal value filter to prevent corpus pollution.
1. Redundancy Check
2. Delta Analysis (for partial overlap only)
3. Integration Decision
```python
from memory_palace.corpus import IntegrationDecision, MarginalValueFilter

# Initialize the filter with corpus and index directories
mv_filter = MarginalValueFilter(
    corpus_dir="docs/knowledge-corpus",
    index_dir="docs/knowledge-corpus/indexes",
)

# Evaluate new content (article_text holds the fetched resource)
redundancy, delta, integration = mv_filter.evaluate_content(
    content=article_text,
    title="Structured Concurrency in Python",
    tags=["async", "concurrency", "python"],
)

# Get a human-readable explanation
explanation = mv_filter.explain_decision(redundancy, delta, integration)
print(explanation)

# Act on the decision (store_knowledge, enhance_entry, and replace_entry
# are your own storage helpers)
if integration.decision == IntegrationDecision.SKIP:
    print(f"Skipping: {integration.rationale}")
elif integration.decision == IntegrationDecision.STANDALONE:
    # Store as a new entry
    store_knowledge(article_text, "Structured Concurrency in Python")
elif integration.decision == IntegrationDecision.MERGE:
    # Enhance the closest existing entry
    enhance_entry(integration.target_entries[0], article_text)
elif integration.decision == IntegrationDecision.REPLACE:
    # Replace the outdated entry
    replace_entry(integration.target_entries[0], article_text)
```
```
=== Marginal Value Assessment ===
Redundancy: partial
  Overlap: 65%
  Matches: async-patterns, python-concurrency
  - Partial overlap (65%) with 2 entries
Delta Type: novel_insight
  Value Score: 75%
  Teaching Delta: Introduces 8 new concepts
  Novel aspects:
    + New concepts: structured, taskgroup, context-manager
    + New topics: Error Propagation, Resource Cleanup
Decision: STANDALONE
Confidence: 80%
Rationale: Novel insights justify standalone: Introduces 8 new concepts
```
The marginal value filter respects autonomy levels (see plan Phase 4):
Current implementation: Level 0 (all human-in-the-loop).
The knowledge corpus uses reinforcement learning signals to dynamically score entry quality based on actual usage patterns.
| Signal | Weight | Description |
|---|---|---|
| ACCESS | +0.1 | Entry was accessed/read |
| CITATION | +0.3 | Entry was cited in another context |
| POSITIVE_FEEDBACK | +0.5 | User marked as helpful |
| NEGATIVE_FEEDBACK | -0.3 | User marked as unhelpful |
| CORRECTION | +0.2 | Entry was corrected/updated |
| STALE_FLAG | -0.4 | Entry marked as potentially outdated |
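A minimal sketch of how these signal weights might fold into a quality score. The starting score and the [0, 1] clamp are assumptions for illustration, not the documented scoring algorithm:

```python
# Signal weights from the table above.
SIGNAL_WEIGHTS = {
    "ACCESS": 0.1,
    "CITATION": 0.3,
    "POSITIVE_FEEDBACK": 0.5,
    "NEGATIVE_FEEDBACK": -0.3,
    "CORRECTION": 0.2,
    "STALE_FLAG": -0.4,
}

def apply_signals(score: float, signals: list) -> float:
    """Add each signal's weight, then clamp to [0, 1] (clamp is an assumption)."""
    for signal in signals:
        score += SIGNAL_WEIGHTS[signal]
    return max(0.0, min(1.0, score))

quality = apply_signals(0.5, ["ACCESS", "POSITIVE_FEEDBACK", "STALE_FLAG"])
```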
Knowledge entries decay over time unless validated:
| Maturity | Half-Life | Decay Curve |
|---|---|---|
| Seedling | 14 days | Exponential |
| Growing | 30 days | Exponential |
| Evergreen | 90 days | Logarithmic |
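For the exponential rows, a half-life of h days means an unvalidated entry's score halves every h days: score(t) = score(0) * 0.5 ** (t / h). A sketch (the logarithmic curve for evergreen entries is not modeled here):

```python
# Half-lives from the decay table above (exponential rows only; the
# evergreen logarithmic curve is not modeled in this sketch).
HALF_LIFE_DAYS = {"seedling": 14, "growing": 30}

def decayed_score(score: float, maturity: str, days_since_validation: float) -> float:
    """Exponential decay: the score halves every half-life."""
    half_life = HALF_LIFE_DAYS[maturity]
    return score * 0.5 ** (days_since_validation / half_life)

# A seedling left unvalidated for one half-life (14 days) keeps half its score.
remaining = decayed_score(1.0, "seedling", 14)
```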
Entries are classified by decay status:
Hybrid lineage tracking based on source importance:
Full Lineage (for important sources):
Simple Lineage (for standard sources):
Full lineage is used for:
The KnowledgeOrchestrator coordinates all quality systems:
```python
from memory_palace.corpus import KnowledgeOrchestrator, UsageSignal

# Initialize the orchestrator
orchestrator = KnowledgeOrchestrator(
    corpus_dir="docs/knowledge-corpus",
    index_dir="docs/knowledge-corpus/indexes",
)

# Record usage events
orchestrator.record_usage("entry-1", UsageSignal.ACCESS)
orchestrator.record_usage("entry-1", UsageSignal.POSITIVE_FEEDBACK)

# Assess entry quality
entry = {"id": "entry-1", "maturity": "growing"}
assessment = orchestrator.assess_entry(entry)
print(f"Quality: {assessment.overall_score:.0%}")
print(f"Status: {assessment.status}")
print(f"Recommendations: {assessment.recommendations}")

# Get the maintenance queue
entries = [...]  # Your entry list
queue = orchestrator.get_maintenance_queue(entries)
for item in queue:
    print(f"{item.entry_id}: {item.status} - {item.recommendations}")

# Ingest new content with lineage
from memory_palace.corpus import SourceReference, SourceType

source = SourceReference(
    source_id="src-1",
    source_type=SourceType.DOCUMENTATION,
    url="https://docs.example.com/api",
    title="API Documentation",
)
entry_id, decision = orchestrator.ingest_with_lineage(
    content="# API Reference\n...",
    title="API Documentation",
    source=source,
)
```
The marginal value filter emits RL signals on integration decisions:
```python
from memory_palace.corpus import MarginalValueFilter

mv_filter = MarginalValueFilter(corpus_dir, index_dir)

# Evaluate with RL signal emission
redundancy, delta, integration, rl_signal = mv_filter.evaluate_with_rl(
    content=article_text,
    title="New Article",
    tags=["python", "async"],
)

# The RL signal contains:
# - signal_type: UsageSignal to emit
# - weight: signal weight for scoring
# - action: what happened (new_entry_created, entry_enhanced, etc.)
# - decision: the integration decision made
# - confidence: decision confidence
print(f"RL Signal: {rl_signal['action']} (weight: {rl_signal['weight']})")
```
User shares: "Check out this article on structured concurrency"
```yaml
intake:
  source: "https://example.com/structured-concurrency"

  # PHASE 3: Marginal Value Filter
  marginal_value:
    redundancy:
      level: partial_overlap
      overlap_score: 0.65
      matching_entries: [async-patterns, python-concurrency]
    delta:
      type: novel_insight
      value_score: 0.75
      novel_aspects: [structured, taskgroup, context-manager]
      teaching_delta: "Introduces structured concurrency pattern"
    integration:
      decision: standalone
      confidence: 0.80
      rationale: "Novel insights justify standalone entry"

  # Continue with evaluation if the filter passes
  evaluation:
    novelty: 75        # New pattern for error handling
    applicability: 90  # Directly relevant to async code
    durability: 85     # Core concept, won't age quickly
    connectivity: 70   # Links to error handling, async patterns
    authority: 80      # Well-known author, cited sources
    total: 82          # Evergreen, store and apply

  routing:
    type: both
    local_application:
      - Refactor async error handling in current project
      - Add structured concurrency pattern to codebase
    meta_application:
      - Create module in relevant skill
      - Add to knowledge-corpus as reference

  storage:
    location: docs/knowledge-corpus/structured-concurrency.md
    format: memory_palace_entry
    maturity: growing

  pruning:
    displaces:
      - Old async error patterns (mark deprecated)
    complements:
      - Existing error handling module
      - Async patterns documentation
```
```
uv run python skills/knowledge-intake/scripts/intake_cli.py --candidate path/to/intake_candidate.json --auto-accept
```

The CLI writes accepted entries into the knowledge corpus (docs/knowledge-corpus/*.md) and developer drafts (docs/developer-drafts/), and appends audit rows to docs/curation-log.md. Use --output-root in tests or sandboxes to avoid mutating the main corpus.

Supporting modules:

- modules/evaluation-rubric.md
- modules/storage-patterns.md
- modules/konmari-tidying.md
- modules/pruning-workflows.md

Memory-palace hooks automatically detect content that may need knowledge intake processing:
| Hook | Event | When Triggered |
|---|---|---|
| url_detector | UserPromptSubmit | User message contains URLs |
| web_content_processor | PostToolUse (WebFetch/WebSearch) | After fetching web content |
| local_doc_processor | PostToolUse (Read) | Reading files in knowledge paths |
When hooks detect potential knowledge content, they add context messages:
```
Memory Palace: New web content fetched from {url}.
Consider running knowledge-intake to evaluate and store if valuable.
```

```
Memory Palace: Reading local knowledge doc '{path}'.
This path is configured for knowledge tracking.
Consider running knowledge-intake if this contains valuable reference material.
```
Hooks check the memory-palace-index.yaml to avoid redundant processing:
Before signaling intake, hooks validate content:
The deduplication index stores fields aligned with this skill's evaluation:
```yaml
entries:
  "https://example.com/article":
    content_hash: "xxh:abc123..."
    stored_at: "docs/knowledge-corpus/article.md"
    importance_score: 82       # From evaluation framework
    maturity: "growing"        # seedling, growing, evergreen
    routing_type: "both"       # local, meta, both
    last_updated: "2025-12-06T..."
```
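A hook-side lookup against this index might look like the sketch below. The helper and the hashing scheme are assumptions: the index above uses xxhash ("xxh:..."), but the sketch substitutes stdlib hashlib.sha256 to stay dependency-free.

```python
import hashlib

def already_processed(index: dict, url: str, content: str) -> bool:
    """Return True if this URL is indexed with an unchanged content hash."""
    entry = index.get("entries", {}).get(url)
    if entry is None:
        return False  # never seen this URL
    digest = "sha256:" + hashlib.sha256(content.encode()).hexdigest()
    return entry.get("content_hash") == digest

# Illustrative index mirroring the YAML structure above.
index = {"entries": {"https://example.com/article": {
    "content_hash": "sha256:" + hashlib.sha256(b"article body").hexdigest(),
    "stored_at": "docs/knowledge-corpus/article.md",
}}}
```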
- memory-palace-architect - Structures stored knowledge spatially
- digital-garden-cultivator - Manages knowledge lifecycle
- knowledge-locator - Finds and retrieves stored knowledge
- skills-eval (abstract) - Evaluates meta-infrastructure updates