Install: `npx claudepluginhub sienklogic/plan-build-run --plugin pbr`
STOP - DO NOT READ THIS FILE. You are already reading it. This prompt was injected into your context by Claude Code's plugin system. Using the Read tool on this SKILL.md file wastes ~7,600 tokens. Begin executing Step 1 immediately.
You are the orchestrator for /pbr:test. This skill generates tests for code that was built WITHOUT TDD mode. It targets key files from completed phases and creates meaningful test coverage.
Reference: skills/shared/context-budget.md for the universal orchestrator rules.
Additionally for this skill:
- key_files lists only - do not read full summaries

Before ANY tool calls, display this banner:
────────────────────────────────────────────────────────────────
  PLAN-BUILD-RUN ► GENERATING TESTS FOR PHASE {N}
────────────────────────────────────────────────────────────────
Where {N} is the phase number from $ARGUMENTS. Then proceed to Step 1.
Preconditions:
- .planning/config.json exists
- .planning/phases/{NN}-{slug}/ exists
- features.tdd_mode is false in config (if TDD mode is enabled, warn the user that tests should already exist and ask whether to proceed anyway)

Parse $ARGUMENTS according to skills/shared/phase-argument-parsing.md.
| Argument | Meaning |
|---|---|
| 3 | Generate tests for phase 3 |
| (no number) | Use current phase from STATE.md |
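The table's rule can be sketched as follows; `parsePhaseArg` and its parameters are illustrative names, not part of pbr-tools.js:

```javascript
// Hypothetical sketch of the argument rule above.
function parsePhaseArg(args, currentPhaseFromState) {
  const match = String(args).trim().match(/^(\d+)$/);
  // A bare number selects that phase; anything else falls back to
  // the current phase recorded in STATE.md.
  return match ? Number(match[1]) : currentPhaseFromState;
}
```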
CRITICAL: Run init command to load project state efficiently.
node "${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js" init execute-phase {phase_number}
This returns the STATE.md snapshot, phase plans, ROADMAP excerpt, and config, all in one call.
Scan the project root for test framework indicators:
- package.json with jest, vitest, mocha, or ava in devDependencies
- pytest.ini, pyproject.toml (with [tool.pytest]), setup.cfg (with [tool:pytest])
- jest.config.*, vitest.config.*, .mocharc.*
- tests/, test/, __tests__/, spec/ directories
- *.test.*, *.spec.*, test_*.py file patterns

If no test framework is detected, ask the user:
Use AskUserQuestion:
  question: "No test framework detected. Which should I use?"
  header: "Framework"
  options:
    - label: "Jest"
      description: "JavaScript/TypeScript testing (most common)"
    - label: "Vitest"
      description: "Vite-native testing (faster, ESM-friendly)"
    - label: "pytest"
      description: "Python testing framework"
  multiSelect: false
Read SUMMARY.md frontmatter from each plan in the phase to extract key_files:
node "${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js" frontmatter .planning/phases/{NN}-{slug}/SUMMARY.md
Collect all key_files across all plans in the phase. Filter to only source files (exclude config, docs, assets). Group by priority:
- High: business logic, API routes, data models
- Medium: utilities and helpers
- Low: everything else
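A minimal sketch of the filter-and-group step; the path heuristics here are assumptions for illustration, not prescribed by the skill:

```javascript
// Hypothetical grouping of collected key_files by priority.
function groupKeyFiles(keyFiles) {
  // Exclude config, docs, and assets; keep only source files.
  const isSource = (f) =>
    !/\.(md|json|ya?ml|png|svg|lock)$/.test(f) && !f.startsWith("docs/");
  // Assumed heuristic: classify by path keywords.
  const priority = (f) => {
    if (/(api|auth|model|service)/.test(f)) return "high";
    if (/(util|helper|lib)/.test(f)) return "medium";
    return "low";
  };
  const groups = { high: [], medium: [], low: [] };
  for (const f of keyFiles.filter(isSource)) groups[priority(f)].push(f);
  return groups;
}
```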
Present the file list to the user:
Use AskUserQuestion:
  question: "Found {N} source files from phase {P}. Generate tests for which?"
  header: "Scope"
  options:
    - label: "High priority only"
      description: "{X} files - business logic, APIs, models"
    - label: "High + Medium"
      description: "{Y} files - adds utilities and helpers"
    - label: "All files"
      description: "{Z} files - comprehensive coverage"
  multiSelect: false
For each target file, create a lightweight test plan (NOT a full PBR PLAN.md ā just a task list):
File: src/auth/login.js
Tests to generate:
- Happy path: valid credentials return token
- Error: invalid password returns 401
- Error: missing email returns 400
- Edge: expired session handling
Framework: jest
Output: tests/auth/login.test.js
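Represented as data, a plan like the one above might be built by a helper such as this; `makeTestPlan` and the src-to-tests path mapping are assumptions for illustration:

```javascript
// Hypothetical helper that builds one lightweight per-file test plan.
function makeTestPlan(sourceFile, cases, framework) {
  // Assumed convention: mirror src/ under tests/ with a .test suffix.
  const output = sourceFile
    .replace(/^src\//, "tests/")
    .replace(/\.(jsx?|tsx?)$/, ".test.$1");
  return { file: sourceFile, tests: cases, framework, output };
}
```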
CRITICAL: Delegate ALL test writing to subagents. Do NOT write test code in the main context.
For each target file (or batch of related files), spawn an executor agent:
Spawn subagent_type: "pbr:executor"
Task: Generate tests for the following file(s):
<files_to_test>
{file_path}: {brief description from SUMMARY}
</files_to_test>
<test_framework>
{detected framework name and version}
Existing test directory: {path}
Test file naming: {pattern, e.g., *.test.js}
</test_framework>
<test_plan>
{test plan from Step 4}
</test_plan>
Instructions:
1. Read each source file to understand the implementation
2. Write test files following the project's existing test patterns
3. Each test file should cover: happy path, error cases, edge cases
4. Use the project's existing mocking patterns if any exist
5. Run the tests to verify they pass: {test_command}
6. Commit with format: test({phase}-tests): add tests for {file}
Spawn up to parallelization.max_concurrent_agents agents in parallel for independent files.
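The concurrency cap can be pictured as simple batching; `batchFiles` is an illustrative name, and real scheduling may differ:

```javascript
// Hypothetical sketch: split independent files into batches no larger
// than parallelization.max_concurrent_agents, run one batch at a time.
function batchFiles(files, maxConcurrent) {
  const batches = [];
  for (let i = 0; i < files.length; i += maxConcurrent) {
    batches.push(files.slice(i, i + maxConcurrent));
  }
  return batches;
}
```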
After all agents complete, check results:
{test_command}
Display completion:
────────────────────────────────────────────────────────────────
  PLAN-BUILD-RUN ► TESTS GENERATED ✓
────────────────────────────────────────────────────────────────
Phase {N}: {X} test files created, {Y} tests passing
Files tested:
- src/auth/login.js → tests/auth/login.test.js (8 tests)
- src/api/users.js → tests/api/users.test.js (12 tests)
────────────────────────────────────────────────────────────────
  ▶ NEXT UP
────────────────────────────────────────────────────────────────
**Run coverage check** to see how much is covered
`npm test -- --coverage`
<sub>`/clear` first for a fresh context window</sub>
**Also available:**
- `/pbr:review {N}` - verify the full phase
- `/pbr:continue` - execute next logical step