Reasoning memory for AI agents. Tracks what was decided, contradicted, and left open across sessions.
Your AI knows what you said. It doesn't know what you decided.
AI memory stores content -- what was discussed, what was mentioned, what came up. None of it stores cognitive structure: which positions are settled, which were rejected and why, which questions are still open. Ask an AI about something you resolved three sessions ago and it surfaces everything -- proposal and rejection, old draft and final decision -- with equal weight and no sense of which is current.
Cairn maintains a typed reasoning graph -- propositions, contradictions, refinements, syntheses, tensions -- with confidence scores and lifecycle status. An LLM with access to this graph knows the state of your thinking, not just a flat log of things you said.
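The typed graph described above can be pictured as plain data. A minimal sketch (illustrative field names and types, not Cairn's actual schema -- the node, edge, and status vocabularies below are taken from the description above):

```python
from dataclasses import dataclass, field

# Illustrative only: Cairn's real schema lives in its database.
NODE_TYPES = {"proposition", "question", "synthesis", "tension"}
EDGE_TYPES = {"supports", "contradicts", "refines"}

@dataclass
class Node:
    node_id: str
    node_type: str          # one of NODE_TYPES
    text: str
    confidence: float       # 0.0 - 1.0
    status: str = "active"  # lifecycle status: active, resolved, ...

@dataclass
class Edge:
    source: str             # node_id of the supporting/challenging node
    target: str
    edge_type: str          # one of EDGE_TYPES

@dataclass
class Graph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def open_questions(self) -> list[Node]:
        # "Which questions are still open" falls out of a simple filter.
        return [n for n in self.nodes.values()
                if n.node_type == "question" and n.status == "active"]
```

With a shape like this, "what's settled" and "what's still open" are queries over typed state, not prompt-time guesswork over a transcript.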
```bash
git clone https://github.com/smcady/Cairn.git cairn && cd cairn
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```

Create `.env.local` with your API key:

```
ANTHROPIC_API_KEY=sk-ant-...
```
Then, from any project you want to use Cairn in:
```bash
cairn init
```
That's it. Cairn uses local embeddings (fastembed) by default, so no additional API keys are needed. For higher-quality embeddings, optionally add a Voyage AI key:
```
VOYAGE_API_KEY=pa-...  # optional: auto-detected, upgrades embedding quality
```
Cairn has two integration surfaces. Choose the one that matches how you work.
If you use Claude Code as your AI development tool, Cairn integrates via hooks and an MCP server. No application code required.
From your project directory:
```bash
cairn init
```
This configures everything in one step:
- `.claude/settings.json` -- Stop hook (captures conversations) + Orient hook (injects prior reasoning before each response)
- `.mcp.json` -- MCP server exposing graph query tools
- `./cairn.db` -- the graph database

All paths are resolved automatically. Restart your Claude Code session after running `cairn init`.
See .mcp.json.example for the MCP server template. Hooks go in .claude/settings.json:
```json
{
  "hooks": {
    "Stop": [{"matcher": "", "hooks": [{"type": "command",
      "command": "CAIRN_DB=\"/path/to/db\" /path/to/cairn/.venv/bin/python /path/to/cairn/scripts/hook_ingest.py"}]}],
    "UserPromptSubmit": [{"matcher": "", "hooks": [{"type": "command",
      "command": "CAIRN_DB=\"/path/to/db\" /path/to/cairn/.venv/bin/python /path/to/cairn/scripts/hook_orient.py",
      "timeout": 10000}]}]
  }
}
```
Replace all /path/to/ values with absolute paths.
| Tool | When to use it |
|---|---|
| `harness_orient(topic)` | Before answering on any topic discussed in prior sessions |
| `harness_query('decision_log')` | "What did we decide about X?" |
| `harness_query('current_state')` | "Where do things stand overall?" |
| `harness_query('disagreement_map')` | "What's still unresolved?" |
| `harness_search(query)` | Find specific nodes before re-opening a discussion |
| `harness_status` | Quick graph overview at session start |
| `harness_trace(node_id)` | "How did we arrive at this position?" |
For a concrete example of what Cairn captures from a 4-turn pricing conversation, including classifier output, confidence changes, and tool responses, see docs/walkthrough.md.
After a few conversations, harness_status returns the current state of your reasoning graph:
```
## Graph Stats
total_nodes: 42
total_edges: 38
active: 35
resolved: 5
propositions: 28
questions: 8
tensions: 2

## Active Propositions
- [a1b2c3d4e5f6] (confidence: 0.8, support: 2, challenges: 0)
  Ship with project-level .mcp.json as the default configuration
- [b2c3d4e5f6a1] (confidence: 0.7, support: 1, challenges: 1)
  SQLite is sufficient for single-user deployment

## Open Questions
- [c3d4e5f6a1b2] Does the classifier correctly handle purely operational
  exchanges (no reasoning content) by producing zero events?

## Syntheses
- [d4e5f6a1b2c3] The event log is the truth; the graph is a derived view.
  Any node's current status is computed from the full chain of events
  that touch it.
```
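That last synthesis describes an event-sourced design. A toy sketch of the idea (hypothetical event kinds and confidence rules, not Cairn's actual event schema): node state is never stored directly, only replayed from the log.

```python
# Toy event-sourcing sketch (hypothetical event kinds and update rules):
# a node's current state is a pure function of every event that touched it.

def replay(events):
    """Fold an ordered event log into {node_id: current_state}."""
    state = {}
    for ev in events:
        node = state.setdefault(ev["node"], {"status": "active", "confidence": 0.5})
        if ev["kind"] == "assert":
            node["confidence"] = ev["confidence"]
        elif ev["kind"] == "support":
            node["confidence"] = min(1.0, node["confidence"] + 0.1)
        elif ev["kind"] == "challenge":
            node["confidence"] = max(0.0, node["confidence"] - 0.1)
        elif ev["kind"] == "resolve":
            node["status"] = "resolved"
    return state

# A proposition asserted at 0.7, challenged once, then resolved:
log = [
    {"kind": "assert", "node": "p1", "confidence": 0.7},
    {"kind": "challenge", "node": "p1"},
    {"kind": "resolve", "node": "p1"},
]
```

Because the log is append-only, rebuilding the graph after a schema or rule change is just a re-replay.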
If you're building an agent or application with the Anthropic SDK, Cairn integrates as a library. You control the capture and retrieval loop.
Capture -- one import change. Every messages.create() and messages.stream() call auto-ingests into the graph as a background task.
```python
# Before
from anthropic import AsyncAnthropic

# After
from cairn.integrations.anthropic import AsyncAnthropic
```
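The wrapper pattern behind this drop-in import can be sketched generically (a hypothetical illustration with a stub client, not Cairn's implementation):

```python
# Hypothetical sketch of the drop-in pattern: subclass the stock client and
# schedule ingestion as a background task, so the caller's request/response
# path is unchanged. StockClient stands in for anthropic.AsyncAnthropic.
import asyncio

class StockClient:
    """Stub for the real SDK client in this sketch."""
    async def create(self, **kwargs):
        return {"role": "assistant", "content": "ok"}

class IngestingClient(StockClient):
    def __init__(self):
        super().__init__()
        self.ingested = []

    async def _ingest(self, request, response):
        # Cairn would classify the exchange here and emit graph events.
        self.ingested.append((request, response))

    async def create(self, **kwargs):
        response = await super().create(**kwargs)
        # Fire-and-forget: ingestion never blocks the caller.
        asyncio.get_running_loop().create_task(self._ingest(kwargs, response))
        return response
```

Cairn's actual wrapper covers both `messages.create()` and `messages.stream()`; the sketch shows only the scheduling idea, which keeps ingestion off the request's hot path.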