# LLM Wiki
You've been here before:
- New session. AI remembers nothing. You spend 20 minutes re-explaining.
- "What did we decide about the pricing formula?" Buried in a 3-hour chat log.
- Five docs in Notion, three in Google Drive, two Excel files on desktop. Which one is current?
This repo fixes that, not with another knowledge-management theory post, but with a drop-in engineering practice that works across AI platforms.
## The Idea
The baseline is Andrej Karpathy's LLM Wiki pattern: the right way to use LLMs is not Q&A but compilation.
```
raw/   (PDFs, Excel, client docs — immutable source material)
  ↓ LLM compiles
wiki/  (markdown — current consensus, continuously updated)
  ↓ LLM generates
code/  (the execution layer — a compiled artifact, not the truth)
```
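The compile target is an ordinary markdown page. For illustration, a compiled wiki page might look like the following (the filename, frontmatter fields, and decision text are illustrative assumptions, not the exact schema the bootstrap generates):

```markdown
---
title: Pricing Formula
status: current
updated: 2025-06-12
sources: [raw/pricing-deck-v3.pdf]
---

## Decision
Unit price = base_cost * (1 + margin), margin fixed at 18%.
Decided in the 2025-06-12 session; supersedes the flat-fee draft.
```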
Five rules:
- Compile-first — Don't just answer. Write conclusions into wiki pages.
- Writeback is mandatory — Every decision goes back to the wiki. Every single one.
- Wiki before RAG — Under ~100 docs (or ~80k tokens, as measured by `scripts/wiki_size_report.py`), the LLM reads the wiki directly. No vector DB needed.
- Obsidian is replaceable, the paradigm is not — The engine is LLM + filesystem + markdown.
- Ideas outrank Code — Your wiki of decisions and formulas is worth more than the code it generates.
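The "wiki before RAG" rule is mechanically checkable. Here is a minimal sketch of that kind of check, using a rough 4-characters-per-token heuristic; the real `scripts/wiki_size_report.py` may measure tokens differently:

```python
from pathlib import Path

DOC_LIMIT = 100        # ~100 docs threshold from the rule above
TOKEN_LIMIT = 80_000   # ~80k tokens threshold

def wiki_fits_in_context(wiki_dir: str) -> bool:
    """Rough check: can the LLM read the whole wiki directly, no RAG?"""
    pages = list(Path(wiki_dir).rglob("*.md"))
    chars = sum(p.stat().st_size for p in pages)
    tokens = chars // 4  # crude ~4 chars/token heuristic
    return len(pages) <= DOC_LIMIT and tokens <= TOKEN_LIMIT
```

Once this returns `False`, the wiki has outgrown direct reading and a retrieval layer starts to pay for itself.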
## Works With Any AI
| Platform | Config File | Status |
|---|---|---|
| Claude Code | CLAUDE.md | Native plugin (claude plugin install Ss1024sS/llm-wiki) + template |
| Codex | AGENTS.md | Native skill + template |
| Cursor | .cursorrules | Template in UNIVERSAL.md |
| Windsurf | .windsurfrules | Template in UNIVERSAL.md |
| ChatGPT | Manual paste | Workflow in UNIVERSAL.md |
Just send your AI this link and tell it to read UNIVERSAL.md:
```
Read https://github.com/Ss1024sS/LLM-wiki/blob/main/UNIVERSAL.md and set up the knowledge system for this project.
```
## What's In This Repo
```
LLM-wiki/
├── UNIVERSAL.md                     # Start here. Setup guide for any AI platform.
├── docs/
│   ├── knowledge-system-playbook.md # Full rationale (Chinese + English), provenance roadmap
│   └── ingest-pipeline.md           # Low-token raw intake + stale detection flow
├── examples/
│   └── demo-project/                # What a bootstrapped project looks like after 3 sessions
│       ├── CLAUDE.md / AGENTS.md / .cursorrules / .windsurfrules
│       ├── docs/wiki/               # 5 wiki pages with realistic content
│       └── manifests/raw_sources.csv
├── .claude-plugin/                  # Claude Code plugin manifest (1.3.0+)
├── commands/                        # Plugin slash commands (/llm-wiki-bootstrap, /llm-wiki-status)
├── skills/
│   └── knowledge-system-bootstrap/
│       ├── scripts/bootstrap_knowledge_system.py  # 240-line renderer (was 2.6k LOC pre-1.3.0)
│       └── templates/               # 33 standalone files: scripts, configs, wiki pages
├── tests/                           # 35 pytest cases for bootstrap + every check
└── scripts/
    ├── bootstrap_knowledge_system.py  # Wrapper — always call this one
    └── install-codex-skill.sh
```
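The stale-detection flow mentioned for `docs/ingest-pipeline.md` can be pictured roughly like this: compare each raw file's mtime against the timestamp of its last wiki compile. The column names `path` and `compiled_at` are assumptions for illustration; the real `raw_sources.csv` schema lives in `manifests/`:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

def stale_sources(manifest: str) -> list[str]:
    """List raw files modified after their last wiki compile.

    Assumed illustrative columns: path, compiled_at (ISO 8601).
    """
    stale = []
    with open(manifest, newline="") as f:
        for row in csv.DictReader(f):
            compiled = datetime.fromisoformat(row["compiled_at"])
            if compiled.tzinfo is None:
                compiled = compiled.replace(tzinfo=timezone.utc)
            mtime = datetime.fromtimestamp(
                Path(row["path"]).stat().st_mtime, tz=timezone.utc
            )
            if mtime > compiled:  # raw changed since last compile → stale
                stale.append(row["path"])
    return stale
```

Anything this returns is a candidate for the next delta-compile pass.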
## Quick Start

### Option A: Tell your AI to do it

```
Read https://github.com/Ss1024sS/LLM-wiki/blob/main/UNIVERSAL.md and set up the knowledge system for this project.
```
Works with Claude Code, Codex, Cursor, Windsurf.
### Option B: Run the bootstrap script

```bash
git clone https://github.com/Ss1024sS/LLM-wiki.git
cd LLM-wiki

# Preview what will be created (writes nothing)
python3 scripts/bootstrap_knowledge_system.py /path/to/your-project "My Project" --dry-run

# Run for real
python3 scripts/bootstrap_knowledge_system.py /path/to/your-project "My Project"
```
Always use `scripts/bootstrap_knowledge_system.py` (the root wrapper). Never call the one inside `skills/` directly — that's the skill's internal copy.
Generates 33 files:
- wiki structure (8 pages with frontmatter)
- manifests + schema
- raw intake + stale reporting scripts
- manual delta-compile scaffolds
- validation scripts, including `wiki_size_report.py` (quantitative RAG-threshold triage) and `provenance_check.py --ci` (raw-free CI)
- Claude Code commands and a CI workflow
- configs for 4 AI platforms
`--force` re-runs back up existing files to `<file>.bak.<timestamp>` before overwriting (pass `--no-backup` to opt out).
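The backup-before-overwrite behavior can be sketched like this; this is an illustrative re-implementation of the semantics described above, not the script's actual code:

```python
import shutil
import time
from pathlib import Path

def write_with_backup(path: Path, content: str, backup: bool = True) -> None:
    """Overwrite a file, first copying it to <file>.bak.<timestamp>.

    Mirrors the described --force semantics; backup=False is --no-backup.
    """
    if path.exists() and backup:
        stamp = time.strftime("%Y%m%d%H%M%S")
        shutil.copy2(path, path.with_name(f"{path.name}.bak.{stamp}"))
    path.write_text(content)
```

The point of the design: a re-run is never destructive by default, so you can diff the `.bak` file against the regenerated one before deleting it.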
### Option C: Install as a Claude Code plugin (1.3.0+)

```shell
claude plugin install Ss1024sS/llm-wiki
```

Then use `/llm-wiki-bootstrap` to scaffold a project, or `/llm-wiki-status` to inspect an existing one.
### Option D: Install as a Codex skill

```
Use $skill-installer to install https://github.com/Ss1024sS/LLM-wiki/tree/main/skills/knowledge-system-bootstrap
```

Or manually:

```bash
bash scripts/install-codex-skill.sh
```
## How Do I Know It's Working?
After 3 sessions, check: