Scans codebase to generate CLAUDE.md project config and .rune/ files including conventions, invariants, and developer guide. Use for new repos or missing context.
`npx claudepluginhub rune-kit/rune --plugin @rune/analytics`

This skill uses the workspace's default tool permissions.
Auto-generate project context for AI sessions. Scans the codebase and creates a CLAUDE.md project config plus .rune/ state directory so every future session starts with full context. Saves 10-20 minutes of re-explaining per session on undocumented projects.
Analyzes unfamiliar codebases to generate structured onboarding guides with architecture maps, key entry points, conventions, and starter CLAUDE.md. Use for new projects or initial Claude Code setup.
Initializes projects for Claude Code by generating CLAUDE.md with progressive disclosure docs, auto-format hooks, test infrastructure; scaffolds empty directories via stack tooling; audits/syncs docs. Supports monorepos/multi-repo git workspaces.
Share bugs, ideas, or general feedback.
- `/rune onboard` — manual invocation on any project
- rescue as Phase 0 (understand before refactoring)
- scout (L2): deep codebase scan — structure, frameworks, patterns, dependencies
- sentinel-env (L3): validate developer environment (runtime versions, required tools, env vars) so the onboarded project is actually runnable
- autopsy (L2): when project appears messy or undocumented — health assessment
- rescue (L1): Phase 0 — understand legacy project before refactoring
- cook (L1): if no CLAUDE.md found, onboard first

project/
├── CLAUDE.md # Project config for AI sessions (with invariants pointer block)
└── .rune/
├── conventions.md # Detected patterns & style
├── decisions.md # Empty, ready for session-bridge
├── progress.md # Empty, ready for session-bridge
├── session-log.md # Empty, ready for session-bridge
├── instincts.md # Empty, ready for session-bridge instinct learning
├── contract.md # Project invariants enforced by cook/sentinel
├── INVARIANTS.md # Danger zones + cross-file rules, consumed by logic-guardian
└── DEVELOPER-GUIDE.md # Human-readable onboarding for new developers
### Step 1 — Scan the Codebase

Invoke rune:scout on the project root. Collect:

- Manifests and version pins: `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `composer.json`, `.nvmrc`, `.python-version`, `Pipfile.lock`, `poetry.lock`, `uv.lock`
- Python environment markers: `.venv/`, `venv/`, `conda-meta/`, `.python-version`
- Entry points: `main.*`, `index.*`, `app.*`, `server.*`
- CI and build files: `.github/workflows/`, `Makefile`, `Dockerfile`

Do not read every source file — scout gives the skeleton. Use Read only on config files and entry points.
### Step 2 — Detect the Tech Stack

From the scan output, determine with confidence:

- `.venv/` or `venv/` directory → venv
- `poetry.lock` → poetry
- `uv.lock` → uv
- `.python-version` → pyenv
- `conda-meta/` or `environment.yml` → conda
- `Pipfile.lock` → pipenv

If a field cannot be determined with confidence, write "unknown" — do not guess.
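The Python-environment mapping above can be sketched as a small shell helper. The check order shown here is an assumption (the skill lists markers but no precedence), and `detect_python_env` is a hypothetical name:

```shell
#!/bin/sh
# Map marker files to a Python environment manager; precedence is illustrative.
detect_python_env() {
  root="${1:-.}"
  if [ -f "$root/poetry.lock" ]; then echo poetry
  elif [ -f "$root/uv.lock" ]; then echo uv
  elif [ -f "$root/Pipfile.lock" ]; then echo pipenv
  elif [ -d "$root/conda-meta" ] || [ -f "$root/environment.yml" ]; then echo conda
  elif [ -f "$root/.python-version" ]; then echo pyenv
  elif [ -d "$root/.venv" ] || [ -d "$root/venv" ]; then echo venv
  else echo unknown  # cannot determine with confidence; do not guess
  fi
}
```

Lockfiles are checked before the generic `.venv/` marker so that, for example, a poetry project that also has a local venv still reports poetry.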
### Step 3 — Extract Conventions

Read 3–5 representative source files (pick files with the most connections in the project — typically the main module, a route/controller file, and a utility file). Extract:

- Test layout: co-located (`file.test.ts`), separate directory (`tests/`), or none

Write extracted conventions as bullet points — be specific, not generic.
### Step 4 — Generate CLAUDE.md

Use Write to create CLAUDE.md at the project root. Populate every section using data from Steps 2–3. Do not leave template placeholders — if data is unknown, write "unknown" or omit the section. Use the template below as the exact structure.
If a CLAUDE.md already exists, use Read to load it first, then merge — preserve any human-written sections (comments starting with `<!-- manual -->`) and update auto-detected sections only.
### Step 5 — Create the .rune/ State Directory

Use Bash to create the directory: `mkdir -p .rune`
Use Write to create each file:
- `.rune/conventions.md` — paste the extracted conventions from Step 3 in full detail
- `.rune/decisions.md` — create with header `# Architecture Decisions` and one placeholder row in a markdown table (Date | Decision | Rationale | Status)
- `.rune/progress.md` — create with header `# Progress Log` and one placeholder entry
- `.rune/session-log.md` — create with header `# Session Log` and the current date as first entry
- `.rune/instincts.md` — create with header `# Project Instincts` and a description: "Learned trigger→action patterns. Managed by session-bridge. See session-bridge SKILL.md Step 5.7 for format."
- `.rune/contract.md` — generate a starter contract based on the detected tech stack:
  - Start from `docs/CONTRACT-TEMPLATE.md`
  - Add stack-specific rules (e.g., Python → no bare `except`, Node.js → add no `console.log`, SQL database → add parameterized queries rule)
  - Omit sections that do not apply (e.g., `contract.operations` for a library with no deployed service)

Scan the project for rules that span files — the kind of mistake a linter cannot catch but a single agent edit can introduce. The goal is to seed `.rune/INVARIANTS.md` with ≥3 plausible rules so logic-guardian has something to enforce on day one.
Invoke the scanner directly:
`node skills/onboard/scripts/onboard-invariants.js --root <project-root>`
What it produces:
- `.rune/INVARIANTS.md` — rendered from `skills/onboard/references/invariants-template.md` plus auto-detected rules in four buckets.
- `CLAUDE.md` — adds a pointer block between `<!-- @rune-invariants-pointer:start -->` and `<!-- @rune-invariants-pointer:end -->` listing top danger-zone globs so every session sees them.

Merge rules (safe re-runs):

- If `.rune/INVARIANTS.md` exists, user edits above `## Auto-detected (new)` are never overwritten.
- New detections are appended under `## Auto-detected (new)`.
- If `<!-- @rune-invariants-pointer:skip -->` appears anywhere in CLAUDE.md, the pointer block is not re-injected.

Emit signal `invariants.seeded` with `{danger_count, critical_count, state_count, cross_count}` when done. session-bridge listens in Phase 3 to surface the loudest rules at session start.
Do not fabricate rules. If detection yields zero results, write _No new detections on this run._ under `## Auto-detected (new)` and move on. A quiet INVARIANTS.md is better than fake rules the user has to prune.
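The pointer-block skip rule above can be expressed as a small guard. `should_inject_pointer` is a hypothetical helper name; the marker string is the one this skill defines:

```shell
#!/bin/sh
# Decide whether the invariants pointer block may be (re-)injected into CLAUDE.md.
should_inject_pointer() {
  file="$1"
  # Honor the opt-out marker: never re-inject if the user added it anywhere.
  if grep -q '<!-- @rune-invariants-pointer:skip -->' "$file" 2>/dev/null; then
    return 1
  fi
  return 0
}
```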
If .rune/instincts.md already exists and contains instinct entries, read it and include a summary in the Onboard Report under ### Learned Instincts. This tells the agent what project-specific behaviors have been learned from previous sessions.
For each instinct with confidence ≥0.6, include in the report: `[trigger] → [action] (confidence: [score])`.
Instincts with confidence <0.6 are still learning — mention count but don't list individually.
Why: Onboard is the first skill that runs in a new session. Surfacing instincts here ensures the agent starts with project-specific learned behaviors, not just static conventions.
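The ≥0.6 filter can be sketched as follows, assuming instinct entries end with a literal `(confidence: 0.N)` tag as in the Onboard Report format; the real entry format is owned by session-bridge, and `high_confidence_instincts` is a hypothetical name:

```shell
#!/bin/sh
# List instinct entries at or above the 0.6 confidence threshold.
# Assumes one entry per line, ending in "(confidence: 0.N)".
high_confidence_instincts() {
  file="$1"
  grep -E 'confidence: 0\.[6-9]' "$file" 2>/dev/null || true
}
```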
Use the data from Steps 2–3 to generate .rune/DEVELOPER-GUIDE.md — a human-readable onboarding guide for new team members joining the project. This is NOT AI context. This is plain English for humans.
Use Write to create .rune/DEVELOPER-GUIDE.md with this template:
# Developer Guide: [Project Name]
## What This Does
[2 sentences max. What problem does this project solve? Who uses it?]
## Quick Setup
[Copy-paste commands to get from zero to running locally]
```bash
# [Python projects] Activate virtual environment
[detected activation command — e.g., source .venv/bin/activate | poetry shell | uv venv && source .venv/bin/activate]
# Install dependencies
[detected command — e.g., pip install -e ".[dev]" | poetry install | npm install]
# Run development server
[detected command]
# Run tests
[detected command]
```

## Key Files

[5–10 most important files with one-line description each]

- [path] — [what it does]

## Running Tests

[test command]

## Common Issues

[Top 3 "it doesn't work" situations with fixes. Only include issues you can infer from the codebase — e.g., missing .env, wrong Node version, database not running]

[Python projects — always include these if applicable:]

- Activate the environment: [activation command]
- Install in editable mode: [install command — e.g., `pip install -e .`]

## Team

[If git log reveals consistent contributors, list them. Otherwise omit this section.]
If `.rune/DEVELOPER-GUIDE.md` already exists, skip and log **INFO**: "Skipped existing .rune/DEVELOPER-GUIDE.md — manual content preserved."
### Step 6c — Suggest L4 Extension Packs
Based on the detected tech stack from Step 2, recommend relevant L4 extension packs. Use the mapping table below to find applicable packs. Only suggest packs that match the detected stack — do not suggest all packs.
| Detected Stack | Suggest Pack | Why |
|----------------|-------------|-----|
| React, Next.js, Vue, Svelte, SvelteKit | `@rune/ui` | Frontend component patterns, design system, accessibility audit |
| Express, Fastify, FastAPI, Django, NestJS, Go HTTP | `@rune/backend` | API patterns, auth flows, middleware, rate limiting |
| Docker, GitHub Actions, Kubernetes, Terraform, CI/CD config | `@rune/devops` | Container patterns, deployment pipelines, infrastructure as code |
| React Native, Expo, Flutter, SwiftUI | `@rune/mobile` | Mobile architecture, navigation patterns, offline sync |
| Security-focused codebase (auth, payments, HIPAA/PCI markers) | `@rune/security` | Threat modeling, OWASP flows, compliance patterns |
| Trading, finance, pricing, portfolio, market data | `@rune/trading` | Market data validation, risk calculation, backtesting patterns |
| Subscription billing, tenant isolation, feature flags | `@rune/saas` | Multi-tenancy, billing integration, feature flag patterns |
| Cart, checkout, product catalog, inventory, payments | `@rune/ecommerce` | Cart patterns, payment flows, inventory management |
| ML models, training pipelines, embeddings, LLM integration | `@rune/ai-ml` | Model evaluation, prompt patterns, inference optimization |
| Game loop, physics, entity systems, multiplayer | `@rune/gamedev` | Game architecture, ECS patterns, netcode |
| CMS, blog, newsletter, SEO, content workflows | `@rune/content` | Content modeling, SEO patterns, editorial workflows |
| Analytics, dashboards, metrics, data pipelines, BI | `@rune/analytics` | Data modeling, visualization patterns, pipeline architecture |
If 0 packs match: omit this section from the report (no suggestions is correct for a generic project).
**Community pack discovery**: Also check if `.rune/community-packs/registry.json` exists. If it does, list installed community packs alongside core pack suggestions. If community packs are installed, include them under a `### Installed Community Packs` subsection.
If ≥1 packs match: include in the Onboard Report under a `### Suggested L4 Packs` section:
Based on your detected stack ([detected frameworks]), these extension packs may be useful:
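Two rows of the mapping table, sketched as a dependency check. The package-name patterns and the `suggest_packs` helper are illustrative assumptions; the real detection comes from the Step 2 scan, not from grepping alone:

```shell
#!/bin/sh
# Suggest L4 packs from package.json dependencies (two rows of the table above).
suggest_packs() {
  pkg="$1"
  [ -f "$pkg" ] || return 0   # no manifest: suggest nothing
  grep -qE '"(react|next|vue|svelte)"' "$pkg" && echo '@rune/ui'
  grep -qE '"(express|fastify|@nestjs/core)"' "$pkg" && echo '@rune/backend'
  return 0                    # zero matches is a valid result
}
```

Zero suggestions is the correct output for a generic project, so the helper succeeds even when nothing matches.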
### Step 6d — Context Budget Check
Audit the project's baseline context cost from MCP servers and agent configurations. This helps developers understand why their context window fills up faster than expected.
1. Count MCP tools available (from session start messages or `settings.json`)
2. Check CLAUDE.md line count
3. If total MCP tools >80 or CLAUDE.md >150 lines, include a **Context Budget Advisory** in the Onboard Report:
**Skip if**: Total MCP tools ≤80 AND CLAUDE.md ≤150 lines (healthy baseline).
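The CLAUDE.md half of this check is scriptable (the MCP tool count comes from session metadata, not the filesystem). The 150-line threshold is the one stated above; `claude_md_over_budget` is a hypothetical name:

```shell
#!/bin/sh
# Exit 0 (true) when CLAUDE.md exceeds the 150-line advisory threshold.
claude_md_over_budget() {
  file="${1:-CLAUDE.md}"
  [ -f "$file" ] || return 1
  [ "$(wc -l < "$file")" -gt 150 ]
}
```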
### Step 6e — AI-Driven Interview (Optional, User-Initiated)
When invoked as `/rune onboard --interview` or when the project is too ambiguous for automated detection (e.g., no package.json, no clear entry point, mixed languages), switch to **conversational onboarding** — the AI asks targeted questions instead of relying solely on file scanning.
#### Interview Flow
Ask 5–8 questions in sequence, adapting based on answers. Start broad, narrow based on responses:
Q1: "What does this project do in one sentence?" → Captures purpose (README may be missing or outdated)
Q2: "Who uses this — internal team, external users, or both?" → Determines audience, affects DEVELOPER-GUIDE.md tone
Q3: "What's the main entry point — where does execution start?" → Bypasses file scanning for complex monorepos
Q4: "What commands do you use daily? (dev server, tests, build)" → Gets verified commands instead of guessing from config files
Q5: "Any areas of the codebase you'd warn a new developer about?" → Captures tribal knowledge that no scan can detect
Q6: "Are there external services this depends on? (databases, APIs, queues)" → Maps integration points for Architecture Map
Q7: "What's the deployment story — how does code get to production?" → Captures CI/CD context
Q8 (conditional): "Anything else a new session should know that's not in the code?" → Catches edge cases, workarounds, known issues
#### Interview Rules
- **Adapt**: Skip questions that were already answered by earlier responses. If Q1 reveals "it's a Next.js app", don't ask about the framework.
- **Validate**: Cross-reference answers with actual file scan results. If user says "we use Jest" but `vitest.config.ts` exists, ask to clarify.
- **Merge**: Interview answers supplement (not replace) automated scan. Scan provides facts, interview provides context and intent.
- **Store**: Save interview responses as high-confidence entries in `.rune/conventions.md` and `.rune/cumulative-notes.md` (tagged `[from-interview]`).
#### When to Auto-Suggest Interview
Suggest switching to interview mode (but don't force it) when:
- Step 2 produces 3+ "unknown" fields in tech stack detection
- Project has no README.md and no package.json/pyproject.toml/Cargo.toml
- Project appears to be a monorepo with 3+ distinct sub-projects
Output: `"ℹ️ This project is hard to auto-detect. Run /rune onboard --interview for guided setup."`
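The 3+ unknown-fields trigger above can be sketched as a counter over detected field values (passed as arguments here; `should_suggest_interview` is a hypothetical name):

```shell
#!/bin/sh
# Suggest interview mode when at least 3 detected fields came back "unknown".
should_suggest_interview() {
  unknowns=0
  for field in "$@"; do
    [ "$field" = "unknown" ] && unknowns=$((unknowns + 1))
  done
  [ "$unknowns" -ge 3 ]
}
```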
### Step 7 — Commit
Use `Bash` to stage and commit the generated files:
```bash
git add CLAUDE.md .rune/ && git commit -m "chore: initialize rune project context"
```

If git is not available or the directory is not a git repo, skip this step and add an INFO note to the report: "Not a git repository — files written but not committed."
If any of the .rune/ files already exist, do not overwrite them (they may contain human-written decisions). Log INFO: "Skipped existing .rune/[file] — manual content preserved."
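The overwrite guard, sketched as a helper (`write_if_absent` is a hypothetical name; the INFO wording is this skill's own):

```shell
#!/bin/sh
# Create a .rune/ file only when absent; existing manual content is never touched.
write_if_absent() {
  path="$1"; content="$2"
  if [ -e "$path" ]; then
    echo "INFO: Skipped existing $path — manual content preserved."
    return 0
  fi
  printf '%s\n' "$content" > "$path"
}
```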
# [Project Name] — Project Configuration
## Overview
[Auto-detected description from README or entry point comments]
## Tech Stack
- Framework: [detected]
- Language: [detected]
- Package Manager: [detected]
- Test Framework: [detected]
- Build Tool: [detected]
- Linter: [detected]
- Python Environment: [detected — venv/poetry/uv/conda/pyenv/pipenv/none] (only if Python project)
## Directory Structure
[Generated tree with one-line annotations per directory]
## Conventions
- Naming: [detected patterns — specific, not generic]
- Error handling: [detected pattern]
- State management: [detected pattern]
- API pattern: [detected pattern]
- Test structure: [detected pattern]
## Commands
- Install: [detected command]
- Dev: [detected command]
- Build: [detected command]
- Test: [detected command]
- Lint: [detected command]
## Key Files
- Entry point: [absolute path]
- Config: [absolute paths]
- Routes/API: [absolute paths]
## Onboard Report
- **Project**: [name] | **Framework**: [detected] | **Language**: [detected]
- **Files**: [count] | **LOC**: [estimate] | **Modules**: [count]
### Generated
- CLAUDE.md (project configuration)
- .rune/conventions.md (detected patterns)
- .rune/decisions.md (initialized)
- .rune/progress.md (initialized)
- .rune/session-log.md (initialized)
- .rune/DEVELOPER-GUIDE.md (human onboarding guide)
### Skipped (already exist)
- [list of files not overwritten]
### Learned Instincts (if any)
- [trigger] → [action] (confidence: [0.6-0.9]) — for each high-confidence instinct
- [N] low-confidence instincts still learning
### Observations
- [notable patterns or anomalies found]
- [potential issues detected]
- [recommendations for the developer]
### Suggested L4 Packs
- **@rune/[pack]** — [reason] (only shown if applicable packs detected)
Known failure modes for this skill. Check these before declaring done.
| Failure Mode | Severity | Mitigation |
|---|---|---|
| CLAUDE.md generated from README alone (no file scan) | CRITICAL | Step 1 MUST invoke scout — never skip actual file scanning |
| DEVELOPER-GUIDE.md contains generic placeholder text not derived from project | HIGH | Every section must reference actual detected commands, files, and patterns — no generic advice |
| Overwriting existing .rune/ files with manual content | CRITICAL | Check file existence before every Write — skip and log INFO if exists |
| Common Issues section fabricated (no actual issues detected) | MEDIUM | Only list issues inferable from codebase (missing .env, Node version, etc.) — omit section if none found |
| Artifact | Format | Location |
|---|---|---|
| Project AI config | Markdown | CLAUDE.md (project root) |
| Detected conventions | Markdown | .rune/conventions.md |
| Decision log (initialized) | Markdown | .rune/decisions.md |
| Developer onboarding guide | Markdown | .rune/DEVELOPER-GUIDE.md |
| Session/progress files | Markdown | .rune/progress.md, .rune/session-log.md |
~2000–5000 tokens input, ~1000–2000 tokens output. Sonnet for analysis quality.
Scope guardrail: onboard generates project context files — it does not modify source code, install dependencies, or change project configuration.