Process external resources into actionable knowledge with evaluation, storage, and application decisions. Keywords: knowledge intake, article evaluation, paper review, external resource. Use when: the user shares links to articles, papers, or external resources. DO NOT use when: searching existing knowledge (use knowledge-locator instead).
Processes external resources into structured knowledge with evaluation, storage, and application routing.
/plugin marketplace add athola/claude-night-market
/plugin install memory-palace@claude-night-market

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled modules and scripts:
- modules/discussion-promotion.md
- modules/evaluation-rubric.md
- modules/konmari-tidying.md
- modules/pruning-workflows.md
- modules/storage-patterns.md
- prompts/marginal_value_dual.md
- scripts/__init__.py
- scripts/intake_cli.py

Systematically process external resources into actionable knowledge. When a user links an article, blog post, or paper, this skill guides evaluation, storage decisions, and application routing.
A knowledge governance framework that answers three questions for every external resource:
When a user links an external resource, it is a signal of importance.
The act of sharing indicates the resource passed the user's own filter. Our job is to:
When a user shares a link:
1. FETCH → Retrieve and parse the content
2. EVALUATE → Apply importance criteria
3. DECIDE → Storage location and application type
4. STORE → Create structured knowledge entry
5. VALIDATE → Scribe verification (slop scan + doc verify)
6. CONNECT → Link to existing palace structures
7. APPLY → Route to codebase or infrastructure updates
8. PRUNE → Identify displaced/outdated knowledge
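The eight phases above can be sketched as a sequential pipeline. The function and handler names below are illustrative only, not part of the skill's actual API:

```python
# Illustrative sketch of the eight-phase intake workflow.
# Each handler receives and returns a shared context dict.

PHASES = [
    "fetch", "evaluate", "decide", "store",
    "validate", "connect", "apply", "prune",
]

def run_intake(url, handlers):
    """Run every phase in order, threading a context dict through."""
    context = {"source": url}
    for phase in PHASES:
        context = handlers[phase](context)
    return context

# Minimal identity handlers for demonstration
handlers = {phase: (lambda ctx: ctx) for phase in PHASES}
result = run_intake("https://example.com/article", handlers)
```

Real handlers would mutate the context (fetched content, scores, routing decisions) as it flows through the phases.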
All knowledge corpus entries MUST pass scribe validation before finalizing.
Run Skill(scribe:slop-detector) on the new entry:
Run Skill(scribe:doc-verify) to validate:
# Quick validation for knowledge corpus entry
/slop-scan docs/knowledge-corpus/[entry-name].md
/doc-verify docs/knowledge-corpus/[entry-name].md
DO NOT finalize entries with slop score > 2.5 - rewrite with concrete specifics.
Verification: Run the command with --help flag to verify availability.
| Criterion | Weight | Questions |
|---|---|---|
| Novelty | 25% | Does this introduce new patterns or concepts? |
| Applicability | 30% | Can we apply this to current work? |
| Durability | 20% | Will this remain relevant in 6+ months? |
| Connectivity | 15% | Does it connect to multiple existing concepts? |
| Authority | 10% | Is the source credible and well-reasoned? |
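A minimal sketch of how the weighted total could be computed from per-criterion scores; the helper below is an assumption for illustration, not the skill's actual scorer:

```python
# Rubric weights from the table above (must sum to 1.0)
WEIGHTS = {
    "novelty": 0.25,
    "applicability": 0.30,
    "durability": 0.20,
    "connectivity": 0.15,
    "authority": 0.10,
}

def weighted_score(scores):
    """Combine per-criterion scores (0-100) into one weighted total."""
    return sum(scores[name] * weight for name, weight in WEIGHTS.items())

# Example: an article scoring high on applicability and durability
scores = {
    "novelty": 75,
    "applicability": 90,
    "durability": 85,
    "connectivity": 70,
    "authority": 80,
}
print(round(weighted_score(scores), 2))  # 81.25
```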
Apply when knowledge directly improves current project:
Action: Update code, add comments, create ADR
Apply when knowledge improves our plugin ecosystem:
Action: Update skills, create modules, enhance agents
Is the knowledge...
├── About HOW we build things? → Meta-infrastructure
│ ├── Skill patterns → Update abstract/memory-palace skills
│ ├── Learning methods → Add to knowledge-corpus
│ └── Tool techniques → Create new skill module
│
└── About WHAT we're building? → Local codebase
├── Domain knowledge → Store in project docs
├── Implementation patterns → Update code/architecture
└── Bug/issue solutions → Apply fix, document
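The decision tree above can be sketched as a category-to-target lookup. The category keys and the `route` helper are illustrative, not an existing API:

```python
# Knowledge categories from the decision tree, mapped to their targets.
ROUTING = {
    # About HOW we build things -> meta-infrastructure
    "skill_patterns": "update abstract/memory-palace skills",
    "learning_methods": "add to knowledge-corpus",
    "tool_techniques": "create new skill module",
    # About WHAT we're building -> local codebase
    "domain_knowledge": "store in project docs",
    "implementation_patterns": "update code/architecture",
    "bug_solutions": "apply fix, document",
}

def route(category):
    """Return the storage/application target for a knowledge category."""
    try:
        return ROUTING[category]
    except KeyError:
        raise ValueError(f"Unknown knowledge category: {category}")
```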
| Knowledge Type | Location | Format |
|---|---|---|
| Meta-learning patterns | docs/knowledge-corpus/ | Full memory palace entry |
| Skill design insights | skills/*/modules/ | Technique module |
| Tool/library knowledge | docs/references/ | Quick reference |
| Temporary insights | Digital garden seedling | Lightweight note |
"A cluttered palace is a cluttered mind."
New knowledge often displaces old, but time is not the criterion. Relevance and aspirational alignment are.
The human in the loop defines what stays. Before major tidying:
For each piece of knowledge, both must be yes:
| Finding | Action |
|---|---|
| Supersedes | Archive old with gratitude, link as context |
| Contradicts | Evaluate both, keep what sparks joy |
| No longer aligned | Release with gratitude |
| Complements | Create bidirectional links |
"I might need this someday" is fear, not joy. Release it.
"If it can't teach something the existing corpus can't already teach → skip it."
Before storing ANY knowledge, run the marginal value filter to prevent corpus pollution.
1. Redundancy Check
2. Delta Analysis (for partial overlap only)
3. Integration Decision
from memory_palace.corpus import IntegrationDecision, MarginalValueFilter

# Initialize filter with corpus and index directories
mv_filter = MarginalValueFilter(
    corpus_dir="docs/knowledge-corpus",
    index_dir="docs/knowledge-corpus/indexes"
)

# Evaluate new content
redundancy, delta, integration = mv_filter.evaluate_content(
    content=article_text,
    title="Structured Concurrency in Python",
    tags=["async", "concurrency", "python"]
)

# Get human-readable explanation
explanation = mv_filter.explain_decision(redundancy, delta, integration)
print(explanation)

# Act on decision
if integration.decision == IntegrationDecision.SKIP:
    print(f"Skipping: {integration.rationale}")
elif integration.decision == IntegrationDecision.STANDALONE:
    # Store as new entry
    store_knowledge(content, title)
elif integration.decision == IntegrationDecision.MERGE:
    # Enhance existing entry
    enhance_entry(integration.target_entries[0], content)
elif integration.decision == IntegrationDecision.REPLACE:
    # Replace outdated entry
    replace_entry(integration.target_entries[0], content)
=== Marginal Value Assessment ===
Redundancy: partial
Overlap: 65%
Matches: async-patterns, python-concurrency
- Partial overlap (65%) with 2 entries
Delta Type: novel_insight
Value Score: 75%
Teaching Delta: Introduces 8 new concepts
Novel aspects:
+ New concepts: structured, taskgroup, context-manager
+ New topics: Error Propagation, Resource Cleanup
Decision: STANDALONE
Confidence: 80%
Rationale: Novel insights justify standalone: Introduces 8 new concepts
The marginal value filter respects autonomy levels (see plan Phase 4):
Current implementation: Level 0 (all human-in-the-loop).
The knowledge corpus uses reinforcement learning signals to dynamically score entry quality based on actual usage patterns.
| Signal | Weight | Description |
|---|---|---|
| ACCESS | +0.1 | Entry was accessed/read |
| CITATION | +0.3 | Entry was cited in another context |
| POSITIVE_FEEDBACK | +0.5 | User marked as helpful |
| NEGATIVE_FEEDBACK | -0.3 | User marked as unhelpful |
| CORRECTION | +0.2 | Entry was corrected/updated |
| STALE_FLAG | -0.4 | Entry marked as potentially outdated |
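A minimal sketch of how these weighted signals could fold into a quality score in [0, 1]; the actual scoring lives in the orchestrator, so treat this helper as illustrative:

```python
# Signal weights from the table above
SIGNAL_WEIGHTS = {
    "ACCESS": 0.1,
    "CITATION": 0.3,
    "POSITIVE_FEEDBACK": 0.5,
    "NEGATIVE_FEEDBACK": -0.3,
    "CORRECTION": 0.2,
    "STALE_FLAG": -0.4,
}

def apply_signals(score, signals):
    """Fold usage signals into a quality score, clamped to [0, 1]."""
    for signal in signals:
        score = max(0.0, min(1.0, score + SIGNAL_WEIGHTS[signal]))
    return score

# An accessed, cited, well-reviewed entry saturates at 1.0
score = apply_signals(0.5, ["ACCESS", "CITATION", "POSITIVE_FEEDBACK"])
```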
Knowledge entries decay over time unless validated:
| Maturity | Half-Life | Decay Curve |
|---|---|---|
| Seedling | 14 days | Exponential |
| Growing | 30 days | Exponential |
| Evergreen | 90 days | Logarithmic |
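For the exponential rows, the decayed score follows the standard half-life formula; the sketch below assumes that formula and omits the logarithmic curve used for evergreen entries:

```python
# Half-lives (in days) for the exponentially decaying maturities above
HALF_LIVES = {"seedling": 14, "growing": 30}

def decayed_score(score, maturity, days_since_validation):
    """Exponential decay: the score halves once per half-life period."""
    half_life = HALF_LIVES[maturity]
    return score * 0.5 ** (days_since_validation / half_life)

# A growing entry's score halves after 30 days without validation
print(decayed_score(0.8, "growing", 30))  # 0.4
```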
Entries are classified by decay status:
Hybrid lineage tracking based on source importance:
Full Lineage (for important sources):
Simple Lineage (for standard sources):
Full lineage is used for:
The KnowledgeOrchestrator coordinates all quality systems:
from memory_palace.corpus import KnowledgeOrchestrator, UsageSignal

# Initialize orchestrator
orchestrator = KnowledgeOrchestrator(
    corpus_dir="docs/knowledge-corpus",
    index_dir="docs/knowledge-corpus/indexes"
)

# Record usage events
orchestrator.record_usage("entry-1", UsageSignal.ACCESS)
orchestrator.record_usage("entry-1", UsageSignal.POSITIVE_FEEDBACK)

# Assess entry quality
entry = {"id": "entry-1", "maturity": "growing"}
assessment = orchestrator.assess_entry(entry)
print(f"Quality: {assessment.overall_score:.0%}")
print(f"Status: {assessment.status}")
print(f"Recommendations: {assessment.recommendations}")

# Get maintenance queue
entries = [...]  # Your entry list
queue = orchestrator.get_maintenance_queue(entries)
for item in queue:
    print(f"{item.entry_id}: {item.status} - {item.recommendations}")

# Ingest new content with lineage
from memory_palace.corpus import SourceReference, SourceType

source = SourceReference(
    source_id="src-1",
    source_type=SourceType.DOCUMENTATION,
    url="https://docs.example.com/api",
    title="API Documentation"
)

entry_id, decision = orchestrator.ingest_with_lineage(
    content="# API Reference\n...",
    title="API Documentation",
    source=source
)
The marginal value filter emits RL signals on integration decisions:
from memory_palace.corpus import MarginalValueFilter

mv_filter = MarginalValueFilter(corpus_dir, index_dir)

# Evaluate with RL signal emission
redundancy, delta, integration, rl_signal = mv_filter.evaluate_with_rl(
    content=article_text,
    title="New Article",
    tags=["python", "async"]
)

# RL signal contains:
# - signal_type: UsageSignal to emit
# - weight: Signal weight for scoring
# - action: What happened (new_entry_created, entry_enhanced, etc.)
# - decision: Integration decision made
# - confidence: Decision confidence
print(f"RL Signal: {rl_signal['action']} (weight: {rl_signal['weight']})")
User shares: "Check out this article on structured concurrency"
intake:
  source: "https://example.com/structured-concurrency"

# PHASE 3: Marginal Value Filter
marginal_value:
  redundancy:
    level: partial_overlap
    overlap_score: 0.65
    matching_entries: [async-patterns, python-concurrency]
  delta:
    type: novel_insight
    value_score: 0.75
    novel_aspects: [structured, taskgroup, context-manager]
    teaching_delta: "Introduces structured concurrency pattern"
  integration:
    decision: standalone
    confidence: 0.80
    rationale: "Novel insights justify standalone entry"

# Continue with evaluation if filter passes
evaluation:
  novelty: 75        # New pattern for error handling
  applicability: 90  # Directly relevant to async code
  durability: 85     # Core concept, won't age quickly
  connectivity: 70   # Links to error handling, async patterns
  authority: 80      # Well-known author, cited sources
  total: 82          # Evergreen, store and apply

routing:
  type: both
  local_application:
    - Refactor async error handling in current project
    - Add structured concurrency pattern to codebase
  meta_application:
    - Create module in relevant skill
    - Add to knowledge-corpus as reference

storage:
  location: docs/knowledge-corpus/structured-concurrency.md
  format: memory_palace_entry
  maturity: growing

pruning:
  displaces:
    - Old async error patterns (mark deprecated)
  complements:
    - Existing error handling module
    - Async patterns documentation
Research sessions and external content are automatically queued for review in docs/knowledge-corpus/queue/.
# List pending queue entries
ls -1t docs/knowledge-corpus/queue/*.yaml
# Review specific entry
cat docs/knowledge-corpus/queue/2025-12-31_topic.yaml
# Process approved entry
# 1. Create memory palace entry in docs/knowledge-corpus/
# 2. Update queue entry status to 'processed'
# 3. Archive or delete queue entry
The research-queue-integration hook automatically queues:
Queue entry format: See docs/knowledge-corpus/queue/README.md
pending_review → [Review] → approved/rejected
approved → [Create Entry] → processed
processed → [Archive] → queue/archive/
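The lifecycle above amounts to a small state machine. A sketch of the allowed transitions follows; the transition table and helper are illustrative, not the hook's actual implementation:

```python
# Allowed queue-entry status transitions from the lifecycle above
TRANSITIONS = {
    "pending_review": {"approved", "rejected"},
    "approved": {"processed"},
    "processed": {"archived"},
}

def advance(status, new_status):
    """Validate a status change against the queue lifecycle."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status
```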
Run the intake CLI on a candidate file:

uv run python skills/knowledge-intake/scripts/intake_cli.py --candidate path/to/intake_candidate.json --auto-accept

The CLI writes corpus entries (docs/knowledge-corpus/*.md) and developer drafts (docs/developer-drafts/), and appends audit rows to docs/curation-log.md. Use --output-root in tests or sandboxes to avoid mutating the main corpus, and the --process-queue flag to review and process queued entries interactively.

Related modules:
- modules/evaluation-rubric.md
- modules/storage-patterns.md
- modules/konmari-tidying.md
- modules/pruning-workflows.md
- modules/discussion-promotion.md

Memory-palace hooks automatically detect content that may need knowledge intake processing:
| Hook | Event | When Triggered |
|---|---|---|
| url_detector | UserPromptSubmit | User message contains URLs |
| web_content_processor | PostToolUse (WebFetch/WebSearch) | After fetching web content |
| local_doc_processor | PostToolUse (Read) | Reading files in knowledge paths |
| research_queue_integration | SessionEnd | Research sessions with 3+ WebSearch calls |
When hooks detect potential knowledge content, they add context messages:
Memory Palace: New web content fetched from {url}.
Consider running knowledge-intake to evaluate and store if valuable.
Memory Palace: Reading local knowledge doc '{path}'.
This path is configured for knowledge tracking.
Consider running knowledge-intake if this contains valuable reference material.
Hooks check the memory-palace-index.yaml to avoid redundant processing:
Before signaling intake, hooks validate content:
The deduplication index stores fields aligned with this skill's evaluation:
entries:
  "https://example.com/article":
    content_hash: "xxh:abc123..."
    stored_at: "docs/knowledge-corpus/article.md"
    importance_score: 82   # From evaluation framework
    maturity: "growing"    # seedling, growing, evergreen
    routing_type: "both"   # local, meta, both
    last_updated: "2025-12-06T..."
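A sketch of the deduplication check against this index. The real index hashes with xxhash (hence the `xxh:` prefix); sha256 from the standard library stands in here, and the helper names are illustrative:

```python
import hashlib

def content_hash(text):
    """Hash normalized content; sha256 stands in for the index's xxhash."""
    return "sha256:" + hashlib.sha256(text.strip().encode("utf-8")).hexdigest()

def already_stored(index, url, text):
    """True if this URL's content is unchanged since the last intake."""
    entry = index.get("entries", {}).get(url)
    return entry is not None and entry["content_hash"] == content_hash(text)

# Hypothetical in-memory view of memory-palace-index.yaml
index = {
    "entries": {
        "https://example.com/article": {
            "content_hash": content_hash("Structured concurrency..."),
            "stored_at": "docs/knowledge-corpus/article.md",
        }
    }
}
print(already_stored(index, "https://example.com/article", "Structured concurrency..."))  # True
```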
Related components:
- memory-palace-architect - Structures stored knowledge spatially
- digital-garden-cultivator - Manages knowledge lifecycle
- knowledge-locator - Finds and retrieves stored knowledge
- skills-eval (abstract) - Evaluates meta-infrastructure updates

Troubleshooting:
- Command not found: Ensure all dependencies are installed and in PATH
- Permission errors: Check file permissions and run with appropriate privileges
- Unexpected behavior: Enable verbose logging with the --verbose flag