From arscontexta
Queries the bundled research knowledge graph for knowledge-systems methodology guidance via a 3-tier KB (WHY claims, HOW docs, examples), returning cited, practical answers.
```
npx claudepluginhub agenticnotetaking/arscontexta --plugin arscontexta
```

This skill is limited to using the following tools:
Delivers research-backed architecture recommendations for knowledge systems grounded in TFT research, with rationales for decisions based on use case, constraints, and goals.
Builds and maintains persistent Obsidian wiki vaults using AI for source ingestion, knowledge querying, note linting, and autonomous research.
Question: $ARGUMENTS
If no question provided, ask the user what they want to know.
Execute these steps:
- Use mcp__qmd__multi_get when reading multiple IDs
- Read ops/derivation.md if the question involves their specific system

START NOW. The reference below explains routing and synthesis methodology.
The plugin's knowledge base has three distinct parts, each serving a different function. Effective answers often draw from multiple tiers.
Location: ${CLAUDE_PLUGIN_ROOT}/methodology/ — filter by kind: research
Content: 213 interconnected research claims grounded in cognitive science, knowledge system theory, and agent cognition research.
Use for: Questions about principles, trade-offs, why things work, theoretical foundations.
What it contains:
Search strategy: Use mcp__qmd__deep_search (highest quality, LLM-reranked) for conceptual questions. Use mcp__qmd__vector_search for semantic exploration. Use mcp__qmd__search for known terminology. All searches use the methodology collection.
Location: ${CLAUDE_PLUGIN_ROOT}/methodology/ — filter by kind: guidance
Content: 9 operational documents covering procedures, workflows, and implementation rationale.
Use for: Questions about how to do things, operational best practices, workflow mechanics.
Documents include:
Search strategy: mcp__qmd__search with keywords from the question using the methodology collection. To narrow to guidance docs, add kind:guidance to your grep filter on results.
Location: ${CLAUDE_PLUGIN_ROOT}/methodology/ — filter by kind: example
Content: 12 domain-specific compositions showing what generated vaults look like in practice.
Use for: Questions about how to apply methodology to specific domains, inspiration for novel domain mapping.
Examples include domains like:
Search strategy: Use mcp__qmd__vector_search across the methodology collection for semantic domain matching. To list all examples: rg '^kind: example' ${CLAUDE_PLUGIN_ROOT}/methodology/.
Location: ${CLAUDE_PLUGIN_ROOT}/reference/
Content: Structured reference documents supporting derivation and system architecture.
Use for: Deep dives into specific architectural topics, cross-referencing dimension positions, understanding interaction constraints.
Core Architecture:
- methodology.md — universal principles and processing pipeline
- components.md — component blueprints and feature blocks
- kernel.yaml — the 12 non-negotiable primitives
- three-spaces.md — self/notes/ops architecture and boundary rules

Configuration & Derivation:
- dimension-claim-map.md — which research claims inform which dimensions
- interaction-constraints.md — how dimension choices create pressure on others
- tradition-presets.md — named points in configuration space
- vocabulary-transforms.md — universal-to-domain term mapping
- derivation-validation.md — validation tests for derived systems

Behavioral & Quality:
- personality-layer.md — personality derivation and encoding
- conversation-patterns.md — worked examples of full derivation paths
- failure-modes.md — how knowledge systems die and prevention patterns

Lifecycle & Operations:
- use-case-presets.md — preset configurations for common domains
- session-lifecycle.md — session rhythm, context budget, orient-work-persist
- evolution-lifecycle.md — seed-evolve-reseed, condition-based maintenance
- self-space.md — agent identity generation, self/ architecture
- semantic-vs-keyword.md — search modality selection guidance
- open-questions.md — unresolved research questions and deferred items

Also read: claim-map.md — the routing index showing which claims map to which topics. Start here when you need to find relevant claims quickly.
Before searching, classify the user's question to determine which tier(s) to consult.
| Question Type | Signals | Primary Tier | Secondary Tier |
|---|---|---|---|
| WHY | "why does...", "what's the reasoning...", "what's the theory behind...", "why not just..." | Research Graph | Guidance Docs |
| HOW | "how do I...", "what's the workflow for...", "how should I...", "what's the process..." | Guidance Docs | Research Graph |
| WHAT | "what does X look like...", "show me an example...", "how would this work for...", "what would a Y vault..." | Domain Examples | Guidance Docs |
| COMPARE | "X vs Y", "what's the difference between...", "should I use X or Y...", "trade-offs between..." | Research Graph | Examples |
| DIAGNOSE | "something feels wrong...", "why isn't this working...", "my system is doing X when it should..." | Guidance Docs + Reference (failure-modes.md) | Research Graph |
| CONFIGURE | "what dimension...", "how should I set...", "what configuration for...", "which preset..." | Reference (dimensions, constraints) | Research Graph |
| EVOLVE | "should I change...", "my system has grown...", "this doesn't fit anymore..." | Reference (evolution-lifecycle.md) | Guidance Docs |
Many questions require consulting multiple tiers. The classification above shows primary and secondary tiers. Always check: would the answer be stronger with evidence from another tier?
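A sketch only: the signal phrases from the routing table, encoded as a crude keyword router. Real classification is a judgment call, not a regex match; the function name and tier labels here are illustrative.

```shell
# Route a question to a primary tier by signal phrases from the table above.
classify() {
  # lowercase so matching is case-insensitive
  q=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$q" in
    *"why "*|*"reasoning"*|*"theory behind"*) echo "research-graph" ;;
    *"how do i"*|*"workflow"*|*"process"*)    echo "guidance-docs" ;;
    *"look like"*|*"example"*)                echo "domain-examples" ;;
    *" vs "*|*"difference between"*)          echo "research-graph" ;;
    *"isn't working"*|*"feels wrong"*)        echo "guidance-docs" ;;
    *)                                        echo "unclassified" ;;
  esac
}
```

Even as a toy, this captures the key property of the table: the same tier can be primary for several question types, so routing is by intent, not by topic.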
Example multi-tier routing:
Question: "Why does my system use atomic notes instead of longer documents?"
- dimension-claim-map.md for the granularity dimension's informing claims
- ops/derivation.md for their specific granularity position and reasoning

Question: "How should I handle therapy session notes that are very long?"
Read ${CLAUDE_PLUGIN_ROOT}/reference/claim-map.md first. This is the routing index — it shows which topic areas are relevant to the user's question and which claims to start with. Do NOT skip this step and search blindly.
For WHY questions (Research Graph):
mcp__qmd__deep_search query="[user's question rephrased as a search]" collection="methodology" limit=10
Use mcp__qmd__deep_search (hybrid + LLM reranking) for conceptual questions because the best connections often use different vocabulary than the question. Results will include all kinds; prioritize kind: research results.
For HOW questions (Guidance Docs):
mcp__qmd__search query="[key terms from question]" collection="methodology" limit=5
Use keyword search first since guidance docs use consistent terminology. Fall back to semantic if keyword misses. Prioritize kind: guidance results.
For WHAT questions (Domain Examples):
mcp__qmd__vector_search query="[domain + what the user wants to see]" collection="methodology" limit=5
Use semantic search to find the most relevant domain examples even if the exact domain name differs. Prioritize kind: example results.
Fallback chain for qmd lookups:
1. MCP tools (mcp__qmd__deep_search, mcp__qmd__vector_search, mcp__qmd__search)
2. CLI equivalents (qmd query, qmd vsearch, qmd search)
3. Direct file reads from ${CLAUDE_PLUGIN_ROOT}/methodology/ and ${CLAUDE_PLUGIN_ROOT}/reference/

For Reference documents: Read specific reference documents based on the topic. The claim-map will indicate which reference docs are relevant. Load the 2-4 most relevant — not all of them.
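A minimal sketch of the fallback chain in shell, assuming only that a qmd CLI with a search subcommand may or may not be installed (flags omitted, since they vary); grep stands in for rg in the final tier, and the lookup name is illustrative.

```shell
# Fallback sketch: CLI tier if qmd is installed, else direct file reads.
# The MCP tools are only reachable from the agent runtime, not from a shell.
lookup() {
  q="$1"
  if command -v qmd >/dev/null 2>&1; then
    qmd search "$q"                                    # CLI tier
  else
    grep -r -l -i -- "$q" \
      "${CLAUDE_PLUGIN_ROOT:-.}/methodology" \
      "${CLAUDE_PLUGIN_ROOT:-.}/reference" 2>/dev/null # direct file-read tier
  fi
}
```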
Do NOT skim search results. For the top 3-7 results:
The depth principle: A shallow answer citing 10 claims is worse than a deep answer weaving 4 claims into a coherent argument. Read fewer sources more deeply.
If the question involves the user's specific system:
- ops/derivation.md contains their dimension positions, vocabulary, constraints, and the reasoning behind every configuration choice
- Use ${CLAUDE_PLUGIN_ROOT}/reference/vocabulary-transforms.md to translate universal terms into their domain language. Answer about "reflections" not "claims" if they are running a therapy system
- Check ${CLAUDE_PLUGIN_ROOT}/reference/interaction-constraints.md to see if their configuration creates specific pressures relevant to the question
- Use ${CLAUDE_PLUGIN_ROOT}/reference/dimension-claim-map.md to ground answers in the specific claims that inform their configuration

Read ops/methodology/ for system-specific self-knowledge. Methodology notes may be more current than the derivation document — they capture ongoing operational learnings that the original derivation did not anticipate.
```bash
# Load all methodology notes
for f in ops/methodology/*.md; do
  echo "=== $f ==="
  cat "$f"
  echo ""
done
```
When methodology notes address the user's question:
When to prioritize methodology over research:
When to prioritize research over methodology:
Every answer follows this structure:
1. Direct answer — Lead with the answer, not the search process. Do not say "I searched for X and found Y." Say what the answer IS.
2. Research backing — What specific claims support this answer. Cite by title: "According to [[claim title]]..."
3. Practical implications — What this means for the user's specific situation. Use their domain vocabulary if available from derivation.
4. Tensions or caveats — Any unresolved conflicts, limitations, or situations where the answer might not hold. The research has genuine tensions — share them honestly.
5. Further exploration — Related claims or topics the user might want to explore. These are departure points, not assignments.
6. Sources consulted — Briefly note which knowledge layers were used: "Research: [N] claims consulted. Local methodology: [M] notes consulted." When local methodology was relevant, name the specific note: "Your methodology note [[title]] informed the [specific part] of this answer."
Ground answers in specific claims, not general knowledge. The knowledge base exists so answers are evidence-based. If you answer from general knowledge without consulting the graph, you are bypassing the tool.
Acknowledge gaps honestly. When the research does not cover something, say so. "The current research graph doesn't have claims about X" is a legitimate answer. Do not fabricate coverage.
Distinguish certainty levels. Some claims are well-established with multiple supporting claims. Others are preliminary observations or research directions. The confidence field in claim frontmatter signals this:
- confidence: speculative — early-stage observation, not fully evaluated
- confidence: emerging — promising but needs more support
- confidence: supported — well-evidenced claim
- confidence: established — foundational, widely supported
- status: archived — superseded or dissolved claim

Be opinionated. The research has positions. Share them. "The research strongly suggests X because..." is better than "some sources say X, others say Y." If there IS genuine disagreement, present it as a tension, not as false balance.
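To make the signal concrete, here is a sketch of how these fields might appear in claim frontmatter. Only kind, confidence, and status come from this document; everything else about the frontmatter shape is an assumption.

```yaml
---
kind: research
confidence: supported   # one of: speculative | emerging | supported | established
status: active          # archived claims are superseded or dissolved
---
```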
Translate to user context. When the user has a derivation, apply findings to their system. Generic advice is less useful than specific application. "In your therapy vault, this means..." is better than "in general, this means..."
Question: "Why does my system use atomic notes instead of longer documents?"
Classification: WHY -> Primary: Research Graph. Secondary: Reference (dimension-claim-map).
Search:
- mcp__qmd__deep_search query="atomic notes vs compound documents granularity" collection="methodology" limit=8
- reference/dimension-claim-map.md — find granularity dimension's informing claims
- ops/derivation.md — check user's granularity position

Answer:
Your system uses atomic granularity because your conversation signaled "precise claims from papers." The research shows that atomic notes enable independent linking and recombination — according to [[atomic notes maximize recombinable surface area]], each note can be linked from any context without dragging unrelated content along. Your processing pipeline (extract -> reflect -> reweave -> verify) specifically requires atomic granularity to maintain the link fabric, because [[verify phase checks link density per note]] and compound documents would inflate link counts artificially.
The trade-off: atomic notes create more files and require denser navigation structures. [[flat organization requires semantic search at scale]] explains why your system includes semantic search — without it, finding notes in a flat atomic structure becomes impractical beyond ~50 notes.
Tension: [[composability and context compete at the note level]] — making notes small enough to link cleanly sometimes makes them too small to carry their own argument. Your system handles this through the description field and topic map context phrases.
Question: "How do I process a large research paper?"
Classification: HOW -> Primary: Guidance Docs. Secondary: Research Graph.
Search:
- mcp__qmd__search query="large source processing chunking" collection="methodology" limit=5
- mcp__qmd__deep_search query="context degradation large documents extraction" collection="methodology" limit=5

Answer:
For sources over 2500 lines, chunk into segments of 350-1200 lines and process each chunk with fresh context. [Guidance: pipeline processing workflow] explains the chunking strategy in detail.
The reason is [[context quality degrades as conversation grows]] — your extraction quality in the later sections of a 3000-line document will be measurably worse than in the early sections. Fresh context per chunk ensures each section gets your best attention.
Practical steps:
- Run /seed to create the extraction task
- /reduce will automatically detect the source size and plan chunks
- Each chunk extracts independently with a running duplicate list across chunks
- The final extraction report covers all chunks combined
Your config has `processing.depth: standard`, which means chunks process sequentially in the current session. If you switch to `deep`, each chunk would get its own isolated context (maximum quality, but slower).
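The chunk planning above can be sketched mechanically. chunk_source is an illustrative helper, not a plugin command; 800 lines is one point inside the recommended 350-1200 window, and GNU split is assumed for the -d/--additional-suffix flags.

```shell
# Split a large source into 800-line segments so each can be processed
# with fresh context. paper.md becomes paper_chunk_00.md, paper_chunk_01.md, ...
chunk_source() {
  src="$1"
  split -l 800 -d --additional-suffix=.md "$src" "${src%.md}_chunk_"
}
```

Each resulting chunk then gets its own extraction pass, with a running duplicate list carried across chunks.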
Question: "What would a cooking recipe vault look like?"
Classification: WHAT -> Primary: Domain Examples. Secondary: Guidance Docs.
Search:
mcp__qmd__vector_search query="cooking recipes culinary knowledge system" collection="methodology" limit=5

Answer:
[Answer synthesized from examples, showing concrete folder structure, note examples, topic map examples, and vocabulary choices specific to a culinary domain. References the closest existing domain example for structural patterns.]
| Anti-Pattern | Why It Fails | Instead |
|---|---|---|
| Answer without searching | Bypasses the knowledge base entirely | Always search, even for "obvious" questions |
| List claims without synthesis | Dumps search results, forces user to connect dots | Weave claims into a coherent argument |
| Search only one tier | Misses HOW when answering WHY, or vice versa | Check if secondary tier strengthens the answer |
| Ignore user's derivation | Generic advice when specific is available | Read ops/derivation.md for user context |
| Use universal vocabulary | Says "notes" when their system says "reflections" | Apply vocabulary transforms |
| Fabricate claim citations | Cites claims that do not exist in the graph | Only cite claims you actually read |
| Skip the claim-map | Searches blindly without routing | Read claim-map first for topic orientation |
| Present false balance | "Some say X, others say Y" when research has a clear position | Be opinionated — share the research's position |
If the knowledge base genuinely does not cover a topic:
Do NOT extrapolate wildly from tangentially related claims. An honest "I don't know, but here's what's adjacent" is more valuable than a fabricated answer.
When the user's question involves their specific system (not abstract methodology):
Check for ops/derivation.md to understand:
Use ${CLAUDE_PLUGIN_ROOT}/reference/vocabulary-transforms.md to translate universal terms into their domain language. This is not cosmetic — it is about making the answer native to their system.
| Universal Term | Therapy Domain | PM Domain | Research Domain |
|---|---|---|---|
| notes | reflections | decisions | claims |
| topic map | theme map | project map | MOC |
| reduce | surface | extract | reduce |
| reflect | connect | link | reflect |
| inbox | journal | intake | inbox |
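As a sketch, the therapy column of the table above as a plain lookup (the translate name is illustrative), with unmapped terms passed through unchanged:

```shell
# Universal-to-therapy vocabulary transform, mirroring the table above.
translate() {
  case "$1" in
    notes)       echo "reflections" ;;
    "topic map") echo "theme map" ;;
    reduce)      echo "surface" ;;
    reflect)     echo "connect" ;;
    inbox)       echo "journal" ;;
    *)           echo "$1" ;;  # no transform: keep the universal term
  esac
}
```

The pass-through default matters: transforms rename only the terms the derivation actually mapped, so untransformed vocabulary stays universal rather than being guessed at.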
Reference ${CLAUDE_PLUGIN_ROOT}/reference/interaction-constraints.md to understand whether their configuration creates specific pressures relevant to the question. Some dimension combinations create tensions that affect the answer:
Use ${CLAUDE_PLUGIN_ROOT}/reference/dimension-claim-map.md to ground answers in the specific claims that inform their configuration choices. This makes the answer traceable: "Your system does X because claim Y supports it for your configuration."