Creates minimal, high-signal CLAUDE.md and AGENTS.md context files for repos using empirical best practices. Triggers on /init, create/update requests, or missing context during brainstorming.
```
npx claudepluginhub repozy/superpowers-optimized
```

This skill uses the workspace's default tool permissions.
Creates repository-level context files (`CLAUDE.md`, `AGENTS.md`) that give coding agents the minimum guidance needed to work correctly in a repo.
Core principle: Only include what the agent cannot easily discover itself.
Empirical research (Gloaguen et al., 2026 — "Evaluating AGENTS.md") shows that LLM-generated context files decrease agent performance by 3% and increase cost by 20-23% when they contain redundant or overly broad content. Human-written, minimal context files improve performance by ~4%. The difference comes down to signal density — every unnecessary line adds cognitive load without helping the agent solve tasks.
Invoke this skill when any of the following occur:
- The `/init` command is run
- The user asks to create or update `CLAUDE.md` or `AGENTS.md`
- The repo lacks a `CLAUDE.md` and the user begins a new project setup
- The `brainstorming` or `writing-plans` skills run in a repo that lacks a context file

Explicit tool and command mentions are the single most effective instruction type: agents use mentioned tools 1.6x-2.5x more often. Spell out exact commands:
```
npm run test -- --watch
uv run pytest tests/ -x
make lint && make typecheck
```
Document environment setup: env vars, required services, secrets handling, database setup — the things the agent would get wrong without being told.
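A sketch of what such a section might look like in a `CLAUDE.md` (the service, variable names, and commands here are hypothetical examples, not from a real project):

```markdown
## Environment setup

- Copy `.env.example` to `.env` and set `DATABASE_URL` before running anything
- Tests require a local Postgres instance: `docker compose up -d db`
- Secrets come from the environment; never commit them to checked-in files
```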
Focus narrowly on constraints where violating them breaks something:
- "Don't edit `generated/` — they're overwritten by codegen"
- "`legacy/` module uses CommonJS — no ES imports"

Include only patterns unique to this project that differ from standard practice. If it's what any experienced developer would do by default, leave it out.
These categories have been empirically shown to provide zero benefit or actively hurt agent performance:
100% of LLM-generated context files included a repository overview, yet agents took identical steps to discover files whether the overview existed or not. The agent explores the repo anyway — an overview just adds tokens without saving any work.
Same finding: detailed directory structures don't help agents locate relevant files. They navigate codebases by searching, not by reading maps.
Broad architecture descriptions don't help agents solve tasks. If there's an architectural constraint that would cause incorrect behavior (e.g., "this is a monorepo — changes to packages/core require rebuilding all dependents"), include the constraint. Skip the explanation of how the architecture works.
Don't restate what's already in README, docs/, wiki, or inline comments. Redundancy with existing docs is actively harmful — when researchers removed documentation from repos, LLM-generated context files improved performance by 2.7%, proving the duplication was the problem.
"Write tests", "follow SOLID principles", "use meaningful variable names" — agents already know these. Only include project-specific deviations from standard practice.
Unnecessary requirements make tasks harder. Every rule you add has a cost — the agent spends reasoning tokens processing it and may over-apply it. Include a constraint only if violating it would cause a real problem in this specific repo.
Start by reading the repo's configuration files (`package.json`, `tsconfig.json`, `Makefile`, CI configs, etc.) and source structure.
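Putting these principles together, a minimal context file might look like the sketch below. Every name, path, and command here is an illustrative assumption drawn from the examples above, not a prescription:

```markdown
# CLAUDE.md

## Commands

- Test: `npm run test -- --watch`
- Lint and typecheck: `make lint && make typecheck`

## Environment

- Copy `.env.example` to `.env`; tests need `DATABASE_URL` set

## Project-specific rules

- Don't edit `generated/`; it's overwritten by codegen
- The `legacy/` module uses CommonJS, so no ES imports there
```

Note what's absent: no project overview, no directory map, no architecture narrative, no generic advice. Each line states something the agent could not cheaply discover or would get wrong by default.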