From arscontexta
Delivers research-backed architecture recommendations for knowledge systems grounded in TFT research, with rationales for decisions based on use case, constraints, and goals.
Install: npx claudepluginhub agenticnotetaking/arscontexta --plugin arscontexta
Read these files to configure recommendation behavior:
${CLAUDE_PLUGIN_ROOT}/reference/tradition-presets.md — tradition and use-case presets
${CLAUDE_PLUGIN_ROOT}/reference/methodology.md — universal methodology principles
${CLAUDE_PLUGIN_ROOT}/reference/components.md — component blueprints (what can be toggled)
${CLAUDE_PLUGIN_ROOT}/reference/dimension-claim-map.md — maps each dimension position to supporting research claims
${CLAUDE_PLUGIN_ROOT}/reference/interaction-constraints.md — hard blocks, soft warns, cascade effects between dimensions
${CLAUDE_PLUGIN_ROOT}/reference/claim-map.md — topic navigation for the research graph
If any reference file is missing, note the gap but continue with available information. The recommendation degrades gracefully — fewer citations, same structure.
Target: $ARGUMENTS
Parse immediately:
--compare [A] [B]: enter comparison mode — compare two presets or configurations

START NOW. Reference below defines the workflow.
Advisory, not generative.
/recommend exists for exploration. The user is considering a knowledge system — maybe they have a use case, maybe they're comparing approaches, maybe they're curious what the research says about a specific pattern. /recommend answers with specific, research-backed reasoning without creating any files.
This is the entry point before commitment. /setup generates a full system. /recommend sketches what that system would look like and WHY, so the user can decide whether to proceed. Every recommendation traces to specific research claims. "I recommend X" is never enough — "I recommend X because [[claim]]" is the minimum.
The relationship to other skills:
/recommend is the only one that works without an existing system. It's pure reasoning from research.
Extract signals from the user's description. Every word is a signal:
| Signal Category | Examples | Maps To |
|---|---|---|
| Domain | "therapy sessions", "research papers", "trading journal" | Closest preset, schema design |
| Scale | "just starting", "hundreds of notes", "massive corpus" | Granularity, navigation tiers |
| Processing style | "quick capture", "deep analysis", "both" | Processing depth, automation level |
| Platform | "Obsidian", "Claude Code", "plain files" | Platform capabilities, linking type |
| Existing system | "I use PARA", "I have a Zettelkasten", "starting fresh" | Tradition preset baseline |
| Pain points | "can't find anything", "too much ceremony", "notes go stale" | Dimension adjustments |
| Goals | "track claims", "build arguments", "personal reflection" | Note design, schema density |
| Operator | "I'll maintain it", "AI agent runs it", "both" | Automation, maintenance frequency |
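The signal table above can be mechanized as a simple keyword scan. This is an illustrative sketch only — the cue lists and function names are hypothetical, not part of the skill's reference files:

```python
# Illustrative sketch: map keyword cues in a user description to signal
# categories. The cue lists below are hypothetical examples drawn from
# the table above, not an exhaustive vocabulary.
SIGNAL_CUES = {
    "scale": ["just starting", "hundreds of notes", "massive corpus"],
    "platform": ["obsidian", "claude code", "plain files"],
    "pain_points": ["can't find anything", "too much ceremony", "notes go stale"],
}

def extract_signals(description: str) -> dict[str, list[str]]:
    """Return the cues found in each signal category."""
    text = description.lower()
    return {
        category: [cue for cue in cues if cue in text]
        for category, cues in SIGNAL_CUES.items()
    }
```

In practice the extraction is judgment-based, not string matching; the sketch just shows the shape of the output (category → evidence).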
If the user's description lacks critical signals, ask at most 2 clarifying questions. Frame them as choices, not open-ended:
To recommend the right architecture, I need two things:
1. **What kind of knowledge?** (pick closest)
- Research/learning — tracking claims, building arguments
- Creative — drafts, revisions, inspiration
- Operational — tasks, decisions, processes
- Personal — reflections, goals, relationships
- Mixed — multiple of the above
2. **Who operates it?**
- Mostly you (human-maintained)
- Mostly an AI agent
- Both (shared operation)
Do NOT ask more than 2 questions. The recommendation can always be refined. Get enough to start, then recommend.
If after parsing (and optional questions) you still lack critical information, fall back to reasonable defaults and STATE them:
Assuming:
- Platform: Obsidian (most common for personal knowledge)
- Scale: moderate (50-200 notes in first year)
- Operator: human-primary with occasional AI assistance
These assumptions affect the recommendation. Correct any that don't match.
Read ${CLAUDE_PLUGIN_ROOT}/reference/tradition-presets.md, which defines the tradition and use-case presets and their baseline configurations.
Score each preset against the user's signals:
| Criterion | Weight | How to Score |
|---|---|---|
| Domain match | High | Does the preset's intended domain match? |
| Processing style match | High | Does the preset's processing depth match the user's style? |
| Scale match | Medium | Is the preset designed for the user's expected scale? |
| Pain point coverage | Medium | Does the preset address the user's stated friction? |
| Goal alignment | High | Does the preset optimize for what the user wants? |
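The weighted scoring in the table can be sketched as follows. The numeric weights and preset names here are hypothetical stand-ins (the actual judgment is qualitative, per the table):

```python
# Illustrative scoring sketch: High-weight criteria count 3, Medium count 2.
# These numbers are assumptions, not values from the reference files.
WEIGHTS = {"domain": 3, "processing": 3, "goals": 3, "scale": 2, "pain_points": 2}

def score_preset(match: dict[str, bool]) -> int:
    """Sum the weights of the criteria this preset satisfies.

    `match` maps criterion name -> whether the preset matches the user's
    signal on that criterion.
    """
    return sum(WEIGHTS[c] for c, ok in match.items() if ok)

def closest_preset(candidates: dict[str, dict[str, bool]]) -> str:
    """Pick the highest-scoring preset as the baseline."""
    return max(candidates, key=lambda name: score_preset(candidates[name]))
```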
State the closest preset and explain the match:
Closest preset: [preset name]
Match quality: [strong/moderate/partial]
Why: [1-2 sentences explaining the match]
Adjustments needed: [what needs to change from the preset baseline]
If the user's description blends multiple presets, explain the blend:
This blends two presets:
- [Preset A] for [which aspects]
- [Preset B] for [which aspects]
Starting from [Preset A] and adjusting [specific dimensions] toward [Preset B].
Use ${CLAUDE_PLUGIN_ROOT}/reference/claim-map.md to identify which research topics apply to the user's use case. Read the claim map and identify relevant topic clusters.
Use qmd tools to find research claims that apply to the user's specific constraints:
mcp__qmd__deep_search query="[user's domain] knowledge management patterns"
mcp__qmd__vector_search query="[user's specific concern or goal]"
Fallback chain: MCP tools first (mcp__qmd__deep_search, mcp__qmd__vector_search, mcp__qmd__search), then the CLI equivalents (qmd query, qmd vsearch, qmd search).

Run 2-4 targeted searches based on the user's signals.
For each search result that looks relevant, read the claim via mcp__qmd__get (or mcp__qmd__multi_get for batch reads) to understand the full argument. You need enough depth to cite with confidence.
Collect 5-15 relevant claims. You won't cite all of them — but you need a pool to draw from.
Read ${CLAUDE_PLUGIN_ROOT}/reference/dimension-claim-map.md. This maps each dimension position to the research claims that support it.
For each of the 8 configuration dimensions, determine the recommended position:
Granularity — atomic / moderate / compound
Organization — flat / hierarchical
Linking — explicit / explicit+implicit
Processing — heavy / moderate / light
Navigation — 2-tier / 3-tier / 4-tier
Maintenance — condition-based (tight) / condition-based (lax) / manual
Schema — minimal / moderate / dense
Automation — full / convention / manual
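The eight dimensions and their allowed positions can be written down as a small validation table. This is one illustrative representation, under the assumption that positions are exactly the ones listed above:

```python
# The 8 dimensions and their allowed positions, transcribed from the list
# above. Purely illustrative; the skill itself reasons in prose.
DIMENSIONS = {
    "granularity": {"atomic", "moderate", "compound"},
    "organization": {"flat", "hierarchical"},
    "linking": {"explicit", "explicit+implicit"},
    "processing": {"heavy", "moderate", "light"},
    "navigation": {"2-tier", "3-tier", "4-tier"},
    "maintenance": {"condition-based (tight)", "condition-based (lax)", "manual"},
    "schema": {"minimal", "moderate", "dense"},
    "automation": {"full", "convention", "manual"},
}

def validate(config: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the config is well-formed."""
    problems = [f"missing dimension: {d}" for d in DIMENSIONS if d not in config]
    problems += [
        f"invalid position {v!r} for {d}"
        for d, v in config.items()
        if d in DIMENSIONS and v not in DIMENSIONS[d]
    ]
    return problems
```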
For each dimension, assign confidence based on signal strength:
| Confidence | Meaning | When |
|---|---|---|
| High | Strong signals point clearly to this position | Multiple signals converge, research supports, no counter-signals |
| Medium | Reasonable recommendation with some uncertainty | Some signals present, research supports, minor alternatives exist |
| Low | Best guess given limited information | Few signals, multiple valid positions, user should validate |
Read ${CLAUDE_PLUGIN_ROOT}/reference/interaction-constraints.md.
Test the proposed 8-dimension configuration against all hard block rules. Hard blocks are combinations that WILL fail:
Example hard block:
granularity: atomic + navigation: 2-tier
At high volume (200+ notes), atomic granularity with only 2 tiers
creates navigational vertigo — too many notes per MOC.
If a hard block fires: adjust the conflicting dimension to resolve it, and explain which dimension changed and why.
Test against soft warn rules. Soft warns are friction points that have compensating mechanisms:
Example soft warn:
schema: dense + automation: manual
Dense schemas require manual enforcement without automation.
Compensating: add validation scripts triggered by condition checks.
If soft warns fire: keep the configuration, but name the compensating mechanism in the recommendation.
Check cascade effects: would changing one dimension to match the user's needs create pressure on another dimension that wasn't explicitly discussed?
Example cascade:
User wants: processing: heavy
Cascade pressure: maintenance should be condition-based (heavy processing
generates more artifacts that need maintenance)
If cascades detected, include the cascaded dimension in the recommendation with explanation.
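The three checks above (hard blocks, soft warns, cascades) follow one pattern: predicates over the proposed configuration. A minimal sketch, using the example rules from this section as a hypothetical stand-in for interaction-constraints.md:

```python
# Illustrative constraint check mirroring the hard-block / soft-warn /
# cascade examples above. The rule set is a hypothetical stand-in for
# the full interaction-constraints.md file.
def check(config: dict[str, str]) -> list[tuple[str, str]]:
    """Return (severity, message) findings for a proposed configuration."""
    findings = []
    # Hard block: atomic granularity with only 2 navigation tiers.
    if config.get("granularity") == "atomic" and config.get("navigation") == "2-tier":
        findings.append(("HARD BLOCK",
                         "atomic + 2-tier: too many notes per MOC at volume"))
    # Soft warn: dense schema with manual automation.
    if config.get("schema") == "dense" and config.get("automation") == "manual":
        findings.append(("SOFT WARN",
                         "dense schema needs manual enforcement; add validation scripts"))
    # Cascade: heavy processing pressures maintenance toward condition-based.
    if config.get("processing") == "heavy" and "condition-based" not in config.get("maintenance", ""):
        findings.append(("CASCADE",
                         "heavy processing suggests condition-based maintenance"))
    return findings
```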
--=={ recommend }==--
Use case: [1-sentence summary of what the user described]
Closest Preset: [preset name] ([strong/moderate/partial] match)
[Why this preset, what adjustments needed]
## Recommended Configuration
| Dimension | Position | Confidence | Rationale |
|-----------|----------|------------|-----------|
| Granularity | [position] | [H/M/L] | [reason + claim reference] |
| Organization | [position] | [H/M/L] | [reason + claim reference] |
| Linking | [position] | [H/M/L] | [reason + claim reference] |
| Processing | [position] | [H/M/L] | [reason + claim reference] |
| Navigation | [position] | [H/M/L] | [reason + claim reference] |
| Maintenance | [position] | [H/M/L] | [reason + claim reference] |
| Schema | [position] | [H/M/L] | [reason + claim reference] |
| Automation | [position] | [H/M/L] | [reason + claim reference] |
{If interaction constraints fired:}
Interaction Constraints:
[HARD BLOCK | SOFT WARN | CLEAN]: [description]
[Resolution or compensating mechanism]
## Architecture Sketch
Folder structure:
[proposed folder layout for their domain]
Components enabled:
- [component] — [why, one line]
- [component] — [why, one line]
Components skipped:
- [component] — [why not needed]
## Schema Design
Base fields (all {vocabulary.note_plural}):
description: [one-sentence summary]
[domain-specific field]: [purpose]
Example {vocabulary.note} in your domain:
---
description: [example for their domain]
[field]: [example value]
---
# [example title in their domain style]
[2-3 lines showing what a note looks like]
## Processing Pattern
Capture: [how things enter the system]
Process: [how raw input becomes structured knowledge]
Connect: [how notes link to each other]
Maintain: [how the system stays healthy]
## Session Rhythm
Orient: [what to read at session start]
Work: [how to capture during active work]
Persist: [what to save at session end]
## Trade-offs
Optimizes for: [what this configuration prioritizes]
Sacrifices: [what it deprioritizes]
Reconsider when: [signals that the configuration should evolve]
## Research Backing
Key claims supporting this recommendation:
- [claim title] — [how it applies to this recommendation]
- [claim title] — [how it applies]
- [claim title] — [how it applies]
Ready to build this? Run /setup to generate the full system.
Each dimension rationale should include the signal that drove the position, the research claim supporting it, and the alternative considered:
Example:
Granularity → atomic (high confidence):
Signal: "track claims across disciplines" requires decomposing sources
into individual assertions.
Research: [[three capture schools converge through agent-mediated synthesis]]
shows that agent processing recovers what atomic capture loses.
Alternative: Moderate granularity would reduce processing effort but
sacrifice cross-domain connection density, which is your primary goal.
When invoked with --compare [A] [B]:
Read ${CLAUDE_PLUGIN_ROOT}/reference/tradition-presets.md, then output:

--=={ recommend: compare }==--
Comparing: [Preset A] vs [Preset B]
| Dimension | [A] | [B] | Key Difference |
|-----------|-----|-----|----------------|
| Granularity | [pos] | [pos] | [what differs and why] |
| Organization | [pos] | [pos] | [what differs and why] |
| ... | ... | ... | ... |
Where [A] wins:
- [scenario where A is better, with research reason]
Where [B] wins:
- [scenario where B is better, with research reason]
For your use case: [recommendation of which to start from, if user
provided enough context]
Trace to research — no "I recommend X" without a claim reference. If no research covers a specific aspect, say so: "No specific research covers this; recommending based on general principles."
Be honest about confidence — low-confidence recommendations are valuable IF flagged. "I'm less certain about navigation tiers because your scale is unclear" is better than a false-confident recommendation.
Respect simplicity — default to simpler configurations. The user can always add complexity later. Recommend the MINIMUM viable configuration, then note what they'd add as they grow.
Avoid over-engineering — don't recommend features the user didn't ask about. If they want a simple journal, don't recommend 4-tier navigation and dense schemas. Match the system to their actual needs.
Present trade-offs — every position has costs. Make them visible so the user can make an informed choice.
If the user already has a system and wants advice on improving it:
You already have a system. /recommend designs new systems from scratch.
For evolution advice on existing systems, run /architect — it reads your
current configuration and recommends evidence-based changes.
If you want to compare your current setup against the research-optimal
configuration, I can do that here. Want me to?
If they say yes, proceed with comparison: their current configuration vs what research suggests.
If the request is not about a knowledge system (e.g., "recommend a database for my app"):
/recommend is designed for knowledge system architecture — personal or
agent-operated systems for capturing, organizing, and retrieving knowledge.
Your request sounds more like [what it sounds like]. I can help with
that directly, but /recommend's research backing is specific to knowledge
management patterns.
If no preset is a reasonable match:
No existing preset closely matches your use case. Building a custom
configuration from first principles.
Starting from: [methodology.md universal principles]
Domain signals: [what was extracted]
Proceed with Phase 4 (dimension mapping) without a preset baseline. Note lower confidence on dimensions where a preset would have provided grounding.
When signals point in opposite directions (e.g., "I want deep analysis but hate ceremony"):
Tension detected: You want deep processing (heavy) but minimal ceremony
(light schema, manual automation). Research shows these create friction
because deep processing generates metadata that light schemas can't capture.
Resolution: Processing → moderate, Schema → moderate
This gives meaningful analysis without overwhelming ceremony.
You can increase processing depth later if you want deeper extraction.
There is no universal best. Redirect to constraints:
"Best" depends on what you're optimizing for. A research system and
a personal journal need fundamentally different architectures.
Tell me:
- What kind of knowledge are you working with?
- What's your biggest frustration with your current approach?
That gives me enough to recommend something specific.