Run unit test pipeline on a scoped set -- reads REVIEW-SCOPE.md
npx claudepluginhub pragnition/pragnition-public-plugins --plugin rapid
You are the RAPID unit test skill. This skill runs the unit test pipeline on a scoped set. It reads `REVIEW-SCOPE.md` (produced by `/rapid:review`) as its input. Follow these steps IN ORDER. Do not skip steps. Do NOT include stage selection prompting, bug hunt logic, or UAT logic.
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
node "${RAPID_TOOLS}" display banner unit-test
# Load testFrameworks from config.json
CONFIG_PATH=".planning/config.json"
if [ -f "$CONFIG_PATH" ]; then
TEST_FRAMEWORKS=$(node -e "const c=JSON.parse(require('fs').readFileSync('$CONFIG_PATH','utf-8')); console.log(JSON.stringify(c.testFrameworks || []))")
else
TEST_FRAMEWORKS="[]"
fi
The skill uses TEST_FRAMEWORKS to select the appropriate runner for each file being tested. Runner selection logic:
Match each file's language against TEST_FRAMEWORKS by its lang field and use the matching entry's runner and framework (see the sketch below).

The user invokes this skill with: /rapid:unit-test <set-id> or numeric shorthand like /rapid:unit-test 1.
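A minimal sketch of that lookup, assuming each testFrameworks entry carries lang and runner fields (field names inferred from this section, not confirmed by the config schema):

```bash
# Hypothetical lookup: print the configured runner for a given language key.
RUNNER=$(node -e 'const fw = JSON.parse(process.argv[1]);
  const entry = fw.find(f => f.lang === process.argv[2]);
  console.log(entry ? entry.runner : "");' "$TEST_FRAMEWORKS" "js")
# An empty RUNNER means no entry is configured; the agent then picks a framework itself.
```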
If <set-id> was provided, resolve it through the numeric ID resolver:
# (env preamble here)
RESOLVE_RESULT=$(node "${RAPID_TOOLS}" resolve set "<set-input>" 2>&1)
RESOLVE_EXIT=$?
if [ $RESOLVE_EXIT -ne 0 ]; then
  echo "$RESOLVE_RESULT"
  # Display the error message from the JSON and STOP
fi
SET_NAME=$(echo "$RESOLVE_RESULT" | node -e "d=JSON.parse(require('fs').readFileSync(0,'utf-8')); console.log(d.resolvedId)")
Use SET_NAME for all subsequent operations.
If <set-id> was not provided, list the available sets and use AskUserQuestion to ask which one to test:
node "${RAPID_TOOLS}" state get --all
Check if the user invoked with the --post-merge flag: /rapid:unit-test <set-id> --post-merge
If --post-merge is present, set POST_MERGE=true. Post-merge mode reads REVIEW-SCOPE.md from the post-merge artifact directory.
If POST_MERGE=true: Skip this step. Post-merge reviews do not track pipeline state.
Check if unit-test has already been completed for this set:
REVIEW_STATE=$(node "${RAPID_TOOLS}" review state "${SET_NAME}" 2>&1)
Parse the JSON output. If the response contains a stages array, find the unit-test entry.
If unit-test is already complete (status: "complete"), use AskUserQuestion to prompt:
If user chooses "Skip and exit": print "Unit test stage already complete." and exit. If user chooses "Re-run unit tests": continue to Step 1.
If scope is not complete (scope entry missing or status not "complete") and POST_MERGE is not set: display an error directing the user to run /rapid:review {setId} first, then STOP.
If no review state exists and POST_MERGE is not set: display the same error and STOP -- the review scope must exist before unit tests can run.
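A sketch of the unit-test stage lookup used by the checks above, assuming the response is shaped like {"stages":[{"name":"unit-test","status":"complete"}]} (shape inferred from this section):

```bash
# Extract the unit-test stage status; prints "missing" if no entry exists.
UNIT_STATUS=$(echo "$REVIEW_STATE" | node -e '
  const d = JSON.parse(require("fs").readFileSync(0, "utf-8"));
  const s = (d.stages || []).find(x => x.name === "unit-test");
  console.log(s ? s.status : "missing");')
```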
Determine the scope file path:
If POST_MERGE=true (explicit --post-merge flag was provided): Use .planning/post-merge/{setId}/REVIEW-SCOPE.md directly.
If POST_MERGE is not set (no flag): Auto-detect by checking paths in order:
1. .planning/sets/{setId}/REVIEW-SCOPE.md
2. .planning/post-merge/{setId}/REVIEW-SCOPE.md -- if found here, set POST_MERGE=true so downstream artifact writes (REVIEW-UNIT.md, issue logging) use the post-merge directory.

Guard check: If neither path contains the file, display error and STOP:
[RAPID ERROR] REVIEW-SCOPE.md not found for set '{setId}'.
Checked: .planning/sets/{setId}/REVIEW-SCOPE.md
.planning/post-merge/{setId}/REVIEW-SCOPE.md
Run `/rapid:review {setId}` first to generate the review scope.
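A minimal bash sketch of this resolution order, assuming SET_NAME holds the resolved set id:

```bash
# Resolve the scope file, preferring the standard set directory over post-merge.
if [ "${POST_MERGE:-}" = "true" ]; then
  SCOPE_FILE=".planning/post-merge/${SET_NAME}/REVIEW-SCOPE.md"
elif [ -f ".planning/sets/${SET_NAME}/REVIEW-SCOPE.md" ]; then
  SCOPE_FILE=".planning/sets/${SET_NAME}/REVIEW-SCOPE.md"
elif [ -f ".planning/post-merge/${SET_NAME}/REVIEW-SCOPE.md" ]; then
  SCOPE_FILE=".planning/post-merge/${SET_NAME}/REVIEW-SCOPE.md"
  POST_MERGE=true   # downstream artifact writes go to the post-merge directory
else
  echo "[RAPID ERROR] REVIEW-SCOPE.md not found for set '${SET_NAME}'."
  exit 1
fi
```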
Read the file content. Parse the <!-- SCOPE-META {...} --> JSON block to extract metadata:
setId, date, postMerge, worktreePath, totalFiles, useConcernScoping.

Parse the following sections from REVIEW-SCOPE.md:
Extract file paths and wave attribution from the ## Changed Files table. Each row has the format | file | wave |.
Extract file paths from the ## Dependent Files table.
Extract chunks from ## Directory Chunks section. Each chunk is a ### Chunk N: dirName subsection with bulleted file lists.
If useConcernScoping is true in SCOPE-META, parse the ## Concern Scoping section:
Each concern group is a ### ConcernName subsection with file lists; there is also a ### Cross-Cutting Files subsection. Extract criteria from the ## Acceptance Criteria numbered list.
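One way to pull the SCOPE-META JSON out of the file, assuming the block occupies a single line and SCOPE_FILE is the path resolved above (a sketch, not the canonical parser):

```bash
# Extract the SCOPE-META JSON block from the scope file.
META=$(node -e '
  const src = require("fs").readFileSync(process.argv[1], "utf-8");
  const m = src.match(/<!-- SCOPE-META (\{.*?\}) -->/);
  console.log(m ? m[1] : "{}");' "$SCOPE_FILE")
```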
Determine the agent dispatch strategy based on scope:
- Concern-scoped (useConcernScoping = true): process concern groups in batches of ceil(totalGroups / 3) groups each (always approximately 3 batches regardless of group count), spawning one rapid-unit-tester agent per concern group in the batch, named unit-test-{concernName} (kebab-case concern name).
- Small scope without concern groups or chunks: spawn a single rapid-unit-tester agent with all files.
- Chunk-scoped: spawn one rapid-unit-tester agent per directory chunk, named unit-test-chunk-{N}.

Agent prompt template for each agent:
Unit test set '{setId}' -- {concern/chunk description}.
## Files to Test
{file list for this concern/chunk}
## Working Directory
{worktreePath from SCOPE-META, or cwd if post-merge}
## Acceptance Criteria
{relevant acceptance criteria}
## Instructions
1. Read all files in your scope
2. Generate a test plan: for each file, list what functions/behaviors to test
3. Return the test plan via:
<!-- RAPID:RETURN {"status":"COMPLETE","phase":"plan","data":{"testPlan":[...]}} -->
Each test plan entry should include:
- file: the file path
- tests: array of { name, description, type } where type is 'unit' or 'integration'
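For example, a plan return for one hypothetical file could look like (file and test names are illustrative only):

<!-- RAPID:RETURN {"status":"COMPLETE","phase":"plan","data":{"testPlan":[{"file":"src/example.cjs","tests":[{"name":"handlesEmptyInput","description":"returns the default value for empty input","type":"unit"}]}]}} -->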
Collect test plans from all agents. Merge into a combined test plan.
Display the combined test plan grouped by concern/chunk:
--- Unit Test Plan ---
Set: {setId}
Total tests: {count}
[Concern/Chunk 1: name]
- file.cjs: testFunctionA (unit), testFunctionB (unit)
[Concern/Chunk 2: name]
- other.cjs: testIntegration (integration)
-----------------------
Use AskUserQuestion to present the plan:
question: "Approve the unit test plan?"
options: ["Approve", "Modify", "Skip"]
Approve: Proceed to Step 5
Modify: Ask user for modifications, update the plan, re-display
Skip: Skip to Step 8 with empty results
Spawn rapid-unit-tester agents for the execution phase. Use the same dispatch strategy as Step 3 (concern-scoped or chunk-scoped).
Agent prompt template for execution:
Execute unit tests for set '{setId}' -- {concern/chunk description}.
## Files to Test
{file list}
## Test Plan
{approved test plan for this concern/chunk}
## Working Directory
{worktreePath or cwd}
## Instructions
1. Write test files using the `{framework}` framework
2. Test file naming: `{originalFile}.test.{ext}` in the same directory, adapted to each language's convention (`.test.cjs` for JS, `_test.py` for Python, `_test.go` for Go, etc.)
3. Run tests with: {runner} {testFile}
4. Return results via:
<!-- RAPID:RETURN {"status":"COMPLETE","phase":"execute","data":{"results":[{"file":"...","testFile":"...","passed":N,"failed":N,"errors":[]}]}} -->
Merge results across all agents/chunks/concerns.
This step fires only if there are test failures in the merged results from Step 5. If all tests passed, skip directly to Step 6.
Retry limit: Up to 2 retries after the initial execution (3 total attempts maximum). Track retryCount starting at 0.
Display failure summary:
--- Unit Test Failures ---
Passed: {passed} | Failed: {failed}
Failed tests:
- {testFile}: {testName} -- {error summary}
Use AskUserQuestion to offer retrying the failed tests or accepting the results as-is:
If user chooses "Retry...":
Spawn a rapid-test-fixer agent with the failed test details:

Fix failing unit tests for set '{setId}' -- Retry attempt {retryCount + 1}.
## Failed Tests
{JSON array of failed test results with testFile, testName, error}
## Working Directory
{worktreePath or cwd}
## Instructions
1. Read each failing test file
2. Fix the TEST CODE ONLY -- do NOT modify the source code under test
3. Common fixes: incorrect assertions, wrong mock setup, missing test fixtures, async handling errors
4. Re-run each fixed test with: {runner} {testFile}
5. Return via:
<!-- RAPID:RETURN {"status":"COMPLETE","data":{"results":[{"file":"...","testFile":"...","passed":N,"failed":N,"errors":[]}]}} -->
Increment retryCount. If retryCount < 2, loop back to the top of Step 5a (display summary and ask again); if retryCount >= 2, proceed to Step 6 with the best results achieved.

If user chooses "Accept results as-is": Proceed to Step 6 with the current results.
Write the unit test results to:
- Standard: .planning/sets/{setId}/REVIEW-UNIT.md
- Post-merge: .planning/post-merge/{setId}/REVIEW-UNIT.md

This write is idempotent -- if REVIEW-UNIT.md already exists, overwrite it.
Format:
# REVIEW-UNIT: {setId}
## Summary
| Metric | Value |
|--------|-------|
| Total Tests | {count} |
| Passed | {passed} |
| Failed | {failed} |
| Coverage | {concern/chunk count} concern groups |
## Results by Concern/Chunk
### {ConcernName / Chunk N}
| Test File | Passed | Failed | Errors |
|-----------|--------|--------|--------|
| `file.test.cjs` | 5 | 1 | Error message |
## Failed Tests
### {TestName}
- **File:** `path/to/file.test.cjs`
- **Error:** Error description
- **Severity:** high/medium/low
## Test Files Created
- `path/to/file.test.cjs`
For each failed test, log an issue using the review log-issue CLI command.
CLI Flags (recommended):
node "${RAPID_TOOLS}" review log-issue "{setId}" \
--type "test" \
--severity "{severity}" \
--file "{testFile}" \
--description "{test failure description}" \
--source "unit-test"
Stdin JSON alternative:
echo '{"id":"<uuid>","type":"test","severity":"{severity}","file":"{testFile}","description":"{test failure description}","source":"unit-test","createdAt":"<iso-timestamp>"}' | \
node "${RAPID_TOOLS}" review log-issue "{setId}"
The CLI flag interface auto-generates id and createdAt. The stdin JSON interface requires all fields including id and createdAt.
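If you assemble the stdin JSON yourself, Node's built-in crypto.randomUUID() can supply the id; a sketch with placeholder field values:

```bash
# Build a fully-populated issue object and pipe it to the CLI.
node -e '
  console.log(JSON.stringify({
    id: require("crypto").randomUUID(),
    type: "test",
    severity: "high",                  // placeholder
    file: "src/example.test.cjs",      // placeholder
    description: "example failure",    // placeholder
    source: "unit-test",
    createdAt: new Date().toISOString()
  }));' | node "${RAPID_TOOLS}" review log-issue "{setId}"
```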
If in post-merge mode, issues are logged to .planning/post-merge/{setId}/REVIEW-ISSUES.json.
If POST_MERGE=true: Skip this step. Post-merge reviews do not track pipeline state.
Determine the verdict based on test results:
pass, partial, or fail.

Mark the unit-test stage as complete:
node "${RAPID_TOOLS}" review mark-stage "${SET_NAME}" unit-test {verdict}
Where {verdict} is one of pass, fail, or partial based on the test results above.
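A hedged sketch of one plausible mapping, assuming PASSED and FAILED hold the merged counts (the exact thresholds are not specified here):

```bash
# Assumed mapping: no failures -> pass; mixed -> partial; nothing passed -> fail.
if [ "$FAILED" -eq 0 ]; then
  VERDICT="pass"
elif [ "$PASSED" -gt 0 ]; then
  VERDICT="partial"
else
  VERDICT="fail"
fi
```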
Print the completion banner:
--- RAPID Unit Test Complete ---
Set: {setId}{postMerge ? ' (post-merge)' : ''}
Tests: {passed}/{total} passed
Failed: {failed}
Issues Logged: {issueCount}
Output: {path to REVIEW-UNIT.md}
Next steps:
/rapid:bug-hunt {setIndex} -- Run adversarial bug hunt
/rapid:uat {setIndex} -- Run user acceptance testing
/rapid:review summary {setIndex} -- Generate review summary
---------------------------------
Display the completion footer:
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
node "${RAPID_TOOLS}" display footer "/rapid:bug-hunt {setIndex}" --breadcrumb "review [done] > unit-test [done] > bug-hunt > uat"
Then exit. Do NOT prompt for stage selection.
- /rapid:unit-test overwrites REVIEW-UNIT.md and any test files. Previous results are not accumulated.
- Test frameworks are configured during /rapid:init and stored in .planning/config.json under testFrameworks. Each language uses its configured runner (e.g., node --test for JS, pytest for Python, cargo test for Rust, go test for Go). If no configuration exists for a language, the agent autonomously selects the best framework for that language.
- The review scope is produced by /rapid:review. If the scope is stale, re-run /rapid:review first.