Guides delegation to compressed subagents (investigator for code location, builder for 1-2 file edits, reviewer for diff checks) to save main context in long sessions.
`npx claudepluginhub juliusbrussee/caveman --plugin caveman`

This skill uses the workspace's default tool permissions.
Cavecrew = three subagent presets that emit caveman output. Same job as Anthropic defaults (`Explore`, edit-style agents, reviewer); difference is the tool-result they return is compressed, so main context shrinks per delegation.
| Task | Use |
|---|---|
| "Where is X defined / what calls Y / list uses of Z" | cavecrew-investigator |
| Same but you also want suggestions/architecture commentary | Explore (vanilla) |
| Surgical edit, ≤2 files, scope obvious | cavecrew-builder |
| New feature / 3+ files / cross-cutting refactor | Main thread or feature-dev:code-architect |
| Review diff, branch, or file for bugs | cavecrew-reviewer |
| Deep code review with rationale + alternatives | Code Reviewer (vanilla) |
| One-line answer you already know | Main thread, no subagent |
Rule of thumb: if you'd want the subagent's output in 1/3 the tokens, pick cavecrew. If you'd want prose, pick vanilla.
Subagent tool results get injected into main context verbatim. A vanilla Explore that returns 2k tokens of prose costs 2k tokens of main-context budget every time. The same finding from cavecrew-investigator returns ~700 tokens. Across 20 delegations in one session that's the difference between context exhaustion and finishing the task.
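Back-of-envelope, using the section's own per-delegation figures (2k vs ~700 tokens are estimates, not measurements):

```python
# Rough main-context budget math for a session of N delegations,
# using the per-delegation figures quoted above (estimates).
vanilla_tokens = 2000    # prose Explore result injected into main context
cavecrew_tokens = 700    # compressed cavecrew-investigator result
delegations = 20

saved = delegations * (vanilla_tokens - cavecrew_tokens)
print(saved)  # 26000 tokens of main-context budget kept free
```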
What the main thread can rely on, per agent:
cavecrew-investigator

```
<Header>:
- path:line — `symbol` — short note
totals: <counts>.
```

Or `No match.` Always file-path-first, line-number-attached, backticked symbols. Safe to grep with `path:\d+`.
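Because every finding is file-path-first with an attached line number, downstream tooling can parse it mechanically. A minimal sketch, assuming the contract above (the sample output and symbol names are invented):

```python
import re

# One finding per line: "- path:line — `symbol` — short note"
FINDING = re.compile(r"^- (?P<path>\S+):(?P<line>\d+) — `(?P<symbol>[^`]+)` — (?P<note>.*)$")

sample = """callers of parse_config:
- src/loader.py:42 — `load` — direct call
- tests/test_loader.py:17 — `test_load` — asserts result
totals: 2 sites, 2 files."""

sites = [m.groupdict() for m in map(FINDING.match, sample.splitlines()) if m]
print(sites[0]["path"], sites[0]["line"])  # src/loader.py 42
```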
cavecrew-builder

```
<path:line-range> — <change ≤10 words>.
verified: <re-read OK | mismatch @ path:line>.
```

Or one of: `too-big.` / `needs-confirm.` / `ambiguous.` / `regressed.` (terminal first token).
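Because a failed builder run puts its status in the first token, the orchestrating side can branch without parsing prose. A sketch assuming that contract (the sample result strings are invented):

```python
# Terminal statuses the builder contract defines; the first token decides the branch.
TERMINAL = {"too-big.", "needs-confirm.", "ambiguous.", "regressed."}

def classify(result: str) -> str:
    first = result.split(maxsplit=1)[0] if result.strip() else ""
    if first in TERMINAL:
        return first     # escalate: re-scope, confirm, or retry differently
    return "applied"     # normal "<path:line-range> — <change>" report

print(classify("too-big. 5 files touched, refusing."))  # too-big.
print(classify("src/app.py:10-14 — rename handler."))   # applied
```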
cavecrew-reviewer

```
path:line: <emoji> <severity>: <problem>. <fix>.
totals: N🔴 N🟡 N🔵 N❓
```

Or `No issues.` Findings sorted file → line ascending.
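The fixed-order totals line makes it easy to gate on severity counts, e.g. block a merge on any 🔴. A sketch assuming the contract above (the sample report is invented):

```python
import re

# "totals: N🔴 N🟡 N🔵 N❓" — severity counts in fixed order per the contract.
TOTALS = re.compile(r"totals: (\d+)🔴 (\d+)🟡 (\d+)🔵 (\d+)❓")

def severity_counts(report: str):
    m = TOTALS.search(report)
    if m is None:
        return None  # a "No issues." report carries no totals line
    red, yellow, blue, unknown = map(int, m.groups())
    return {"red": red, "yellow": yellow, "blue": blue, "unknown": unknown}

report = ("src/db.py:88: 🔴 critical: unsanitized query. parametrize.\n"
          "totals: 1🔴 0🟡 2🔵 0❓")
counts = severity_counts(report)
print(counts["red"])  # 1
```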
Locate → fix → verify (most common):
1. cavecrew-investigator returns the site list.
2. cavecrew-builder applies the edit.
3. cavecrew-reviewer audits the diff.

Parallel scout (when investigation is broad):
Spawn 2-3 cavecrew-investigator calls in one message (different angles: defs vs callers vs tests). Aggregate in main thread.
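Aggregating parallel scouts is mostly deduplication: two angles (defs vs callers) can re-find the same site, and the `path:line` prefix is a natural key. A sketch under that assumption (sample inputs invented):

```python
# Merge parallel investigator results, deduplicating on the "path:line" key
# that every finding line starts with.
def merge_sites(*scout_results):
    seen, merged = set(), []
    for result in scout_results:
        for line in result.splitlines():
            if not line.startswith("- "):
                continue  # skip header and totals lines
            key = line[2:].split(" — ", 1)[0]  # "path:line"
            if key not in seen:
                seen.add(key)
                merged.append(line)
    return merged

defs = "- src/core.py:10 — `Widget` — class def"
callers = ("- src/core.py:10 — `Widget` — re-found\n"
           "- src/ui.py:55 — `Widget()` — instantiation")
print(len(merge_sites(defs, callers)))  # 2
```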
Single-shot edit (when site is already known):
Skip investigator. Hand exact path:line to cavecrew-builder directly.
Don't use:
- cavecrew-builder when you don't already know the file. Spawn investigator first, or the main thread will eat tokens passing context.
- cavecrew-investigator → cavecrew-builder for a 5-file refactor. Builder will return `too-big.` and you'll have wasted a turn.
- cavecrew-reviewer for "general feedback": it returns findings only, no architecture opinions. Use Code Reviewer (vanilla) for that.

Escape hatch: subagents drop caveman for normal English on security warnings, irreversible-action confirmations, and any output where fragment ambiguity could be misread, then resume caveman after.