Run user acceptance testing on a scoped set -- reads REVIEW-SCOPE.md
```bash
npx claudepluginhub pragnition/pragnition-public-plugins --plugin rapid
```
You are the RAPID UAT skill. This skill runs user acceptance testing on a scoped set. It reads `REVIEW-SCOPE.md` (produced by `/rapid:review`) as its input. UAT runs ONCE on the full scope -- it is never chunked or concern-scoped. Follow these steps IN ORDER. Do not skip steps. Do NOT include stage selection prompting, unit test logic, or bug hunt logic.
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
node "${RAPID_TOOLS}" display banner uat
The user invokes this skill with `/rapid:uat <set-id>` or numeric shorthand like `/rapid:uat 1`.
If <set-id> was provided, resolve it through the numeric ID resolver:
```bash
# (env preamble here)
RESOLVE_RESULT=$(node "${RAPID_TOOLS}" resolve set "<set-input>" 2>&1)
RESOLVE_EXIT=$?
if [ $RESOLVE_EXIT -ne 0 ]; then
  echo "$RESOLVE_RESULT"
  # Display the error message from the JSON and STOP
fi
SET_NAME=$(echo "$RESOLVE_RESULT" | node -e "d=JSON.parse(require('fs').readFileSync(0,'utf-8')); console.log(d.resolvedId)")
```
Use SET_NAME for all subsequent operations.
If <set-id> was not provided, use AskUserQuestion to ask:
node "${RAPID_TOOLS}" state get --all
Check if the user invoked with the `--post-merge` flag: `/rapid:uat <set-id> --post-merge`
If --post-merge is present, set POST_MERGE=true. Post-merge mode reads REVIEW-SCOPE.md from the post-merge artifact directory.
If POST_MERGE=true: Skip this step. Post-merge reviews do not track pipeline state.
Check if UAT has already been completed for this set:
```bash
REVIEW_STATE=$(node "${RAPID_TOOLS}" review state "${SET_NAME}" 2>&1)
```
Parse the JSON output. If the response contains a `stages` array, find the `uat` entry.
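A minimal sketch of pulling the `uat` entry out of that output; the `stages` shape shown here is an assumption based on the description above, not a documented schema:

```bash
# Sketch only: assumes review-state JSON shaped like
# {"stages":[{"name":"uat","status":"complete"}, ...]} (hypothetical shape).
REVIEW_STATE='{"stages":[{"name":"scope","status":"complete"},{"name":"uat","status":"complete"}]}'
UAT_STATUS=$(echo "$REVIEW_STATE" | node -e "
  const d = JSON.parse(require('fs').readFileSync(0, 'utf-8'));
  const uat = (d.stages || []).find(s => s.name === 'uat');
  console.log(uat ? uat.status : 'missing');
")
echo "$UAT_STATUS"
```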
If uat is already complete (status: "complete"), use AskUserQuestion to prompt:
If user chooses "Skip and exit": print "UAT stage already complete." and exit. If user chooses "Re-run UAT": continue to Step 1.
If scope is not complete (scope entry missing or status not "complete") and POST_MERGE is not set: display an error directing the user to run `/rapid:review {setId}` first, and STOP.
If unit-test is not complete (unit-test entry missing or status not "complete") and POST_MERGE is not set: display an error telling the user to complete the unit-test stage first, and STOP.
If no review state exists and POST_MERGE is not set: display an error directing the user to run `/rapid:review {setId}` first, and STOP.
Determine the scope file path:
If POST_MERGE=true (explicit --post-merge flag was provided): Use .planning/post-merge/{setId}/REVIEW-SCOPE.md directly.
If POST_MERGE is not set (no flag): Auto-detect by checking paths in order:
1. `.planning/sets/{setId}/REVIEW-SCOPE.md`
2. `.planning/post-merge/{setId}/REVIEW-SCOPE.md`

If the file is found only at the post-merge path, set POST_MERGE=true so downstream artifact writes (REVIEW-UAT.md, issue logging) use the post-merge directory.

Guard check: if neither path contains the file, display this error and STOP:

```
[RAPID ERROR] REVIEW-SCOPE.md not found for set '{setId}'.
Checked: .planning/sets/{setId}/REVIEW-SCOPE.md
         .planning/post-merge/{setId}/REVIEW-SCOPE.md
Run `/rapid:review {setId}` first to generate the review scope.
```
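The auto-detect can be sketched as a small shell function (the function name is illustrative, not part of the skill):

```bash
# Illustrative helper: echoes the first existing REVIEW-SCOPE.md path for a set.
# A non-empty second argument forces post-merge mode (mirrors --post-merge).
resolve_scope_file() {
  set_name="$1"; post_merge="${2:-}"
  if [ -n "$post_merge" ]; then
    echo ".planning/post-merge/${set_name}/REVIEW-SCOPE.md"; return 0
  fi
  if [ -f ".planning/sets/${set_name}/REVIEW-SCOPE.md" ]; then
    echo ".planning/sets/${set_name}/REVIEW-SCOPE.md"; return 0
  fi
  if [ -f ".planning/post-merge/${set_name}/REVIEW-SCOPE.md" ]; then
    # Found only under post-merge: caller should flip POST_MERGE=true here.
    echo ".planning/post-merge/${set_name}/REVIEW-SCOPE.md"; return 0
  fi
  return 1  # neither path exists -- caller prints the guard error and stops
}
```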
Read the file content. Parse the <!-- SCOPE-META {...} --> JSON block to extract metadata:
`setId`, `date`, `postMerge`, `worktreePath`, `totalFiles`, `useConcernScoping`

Parse the following sections from REVIEW-SCOPE.md:
Extract ALL file paths from the ## Changed Files table. UAT uses the full file list.
Extract ALL file paths from the ## Dependent Files table.
Extract criteria from the ## Acceptance Criteria numbered list. These are the primary input for UAT.
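As a sketch, the SCOPE-META block can be extracted from the file content like this (the sample content is inline here; in practice it would come from the resolved scope file):

```bash
# Sketch: extract the SCOPE-META JSON from REVIEW-SCOPE.md content on stdin.
SCOPE_CONTENT='<!-- SCOPE-META {"setId":"demo","totalFiles":4,"postMerge":false} -->'
META_SET_ID=$(printf '%s\n' "$SCOPE_CONTENT" | node -e "
  const text = require('fs').readFileSync(0, 'utf-8');
  const m = text.match(/<!-- SCOPE-META (\{[\s\S]*?\}) -->/);
  if (!m) { console.error('SCOPE-META block missing'); process.exit(1); }
  console.log(JSON.parse(m[1]).setId);
")
echo "$META_SET_ID"
```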
Important: UAT operates on the FULL scope (all changed files + all dependent files). It does NOT split by concern groups or directory chunks. This is by design -- acceptance testing evaluates the set holistically.
The acceptance criteria extracted in Step 2 are the primary test basis. Each criterion originates from a wave's PLAN.md and is prefixed with [wave-N].
If the set has a CONTEXT.md file (.planning/sets/{setId}/CONTEXT.md), read it to understand the set's implementation decisions and domain context. This helps the UAT agent generate more targeted test scenarios.
Spawn a single rapid-uat agent with the full scope:
User acceptance testing for set '{setId}' -- Test Plan Generation.
## All Changed Files
{complete list of changed files}
## All Dependent Files
{complete list of dependent files}
## Acceptance Criteria
{numbered list of acceptance criteria with wave attribution}
## Set Context
{CONTEXT.md content if available, or 'No additional context available.'}
## Working Directory
{worktreePath from SCOPE-META, or cwd if post-merge}
## Instructions
Generate a comprehensive UAT test plan with detailed step-by-step human verification instructions for each acceptance criterion.
For each acceptance criterion:
1. Create one or more test scenarios
2. For each scenario, specify:
- name: descriptive test name
- criterion: which acceptance criterion it validates (e.g., "[wave-1] Criterion text")
- steps: array of objects, each with:
- instruction: specific human-actionable instruction (e.g., "Navigate to http://localhost:3000/login in your web portal")
- expected: specific observable outcome (e.g., "The page displays a login form with email and password fields")
- files: which source files are relevant
Group scenarios under the acceptance criterion they validate. Duplicate cross-cutting scenarios under each criterion they touch.
Return via:
```
<!-- RAPID:RETURN {"status":"COMPLETE","data":{"testPlan":[{"name":"...","criterion":"[wave-N] ...","steps":[{"instruction":"...","expected":"..."}],"files":["..."]}]}} -->
```
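To recover the agent's result, the skill can scan the agent's final message for that marker; a minimal sketch with a sample message inline:

```bash
# Sketch: parse the RAPID:RETURN marker out of an agent's final message.
AGENT_MSG='Done. <!-- RAPID:RETURN {"status":"COMPLETE","data":{"testPlan":[]}} -->'
RET_STATUS=$(printf '%s\n' "$AGENT_MSG" | node -e "
  const text = require('fs').readFileSync(0, 'utf-8');
  const m = text.match(/<!-- RAPID:RETURN (\{[\s\S]*?\}) -->/);
  console.log(m ? JSON.parse(m[1]).status : 'MISSING');
")
echo "$RET_STATUS"
```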
Display the test plan summary:
```
--- UAT Test Plan ---
Set: {setId}
Total Scenarios: {count}
Total Steps: {totalSteps}

Scenario 1: {name}
  Criterion: {criterion}
  Steps: {step count} steps
  Files: {file list}

Scenario 2: {name}
  Criterion: {criterion}
  Steps: {step count} steps
  Files: {file list}
---------------------
```
Use AskUserQuestion to present the plan:
question: "Approve the UAT test plan?"
options: ["Approve", "Modify", "Skip"]
Approve: Proceed to Step 6
Modify: Allow user to request changes, re-spawn agent with modifications, re-display
Skip: Skip to Step 7 (artifact writing) with empty results
This is the core interaction loop. The skill drives a sequential walk through ALL steps of ALL scenarios, collecting human verdicts one at a time.
Flatten all scenarios' steps into a sequential list with metadata:
{scenarioName, criterion, stepIndex, totalSteps, instruction, expected, files}
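A sketch of that flattening (the test plan here is a made-up two-step example):

```bash
# Sketch: flatten scenarios' steps into one sequential list and count them.
FLAT_COUNT=$(node -e "
  const plan = [{
    name: 'Login works',
    criterion: '[wave-1] User can log in',
    files: ['src/login.ts'],
    steps: [
      { instruction: 'Open /login', expected: 'Login form shown' },
      { instruction: 'Submit valid credentials', expected: 'Dashboard loads' },
    ],
  }];
  const flat = plan.flatMap(s => s.steps.map((st, i) => ({
    scenarioName: s.name, criterion: s.criterion,
    stepIndex: i + 1, totalSteps: s.steps.length,
    instruction: st.instruction, expected: st.expected, files: s.files,
  })));
  console.log(flat.length);
")
echo "$FLAT_COUNT"
```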
Initialize counters: passed = 0, failed = 0, skipped = 0, failures = []
For each step in the flattened list:
a. Use AskUserQuestion:
"Step {globalIndex}/{totalGlobalSteps} -- {criterion}\n\n**Scenario:** {scenarioName}\n\n**{instruction}**\n\nExpected: {expected}\n\nDoes this pass?"["Pass", "Fail", "Skip", "Pass all remaining"]b. On Pass: increment passed, continue to next step.
c. On Fail: increment failed, then prompt for severity:
"Step {globalIndex} FAILED.\n\nWhat severity level?"["Critical", "High", "Medium", "Low"]{
"id": "<setId>-uat-<globalIndex>",
"criterion": "<criterion text>",
"step": "<instruction>",
"description": "Expected: <expected>. Step: <instruction>",
"severity": "<userChoice lowercase>",
"relevantFiles": ["<files array>"],
"userNotes": "",
"expectedBehavior": "<expected>",
"actualBehavior": "Failed (human reported)"
}
failures array.d. On Skip: increment skipped, continue to next step.
e. On Pass all remaining: set all remaining steps to passed (add remaining count to passed), break the loop. This is the escape hatch for large test plans.
Notes:
- The `userNotes` field is set to an empty string because freeform input is not available.
- The `description` field is auto-populated from the step's instruction and expected fields.
- `actualBehavior` is set to "Failed (human reported)" as the default.

Write the UAT results to:
- Standard mode: `.planning/sets/{setId}/REVIEW-UAT.md`
- Post-merge mode: `.planning/post-merge/{setId}/REVIEW-UAT.md`

This write is idempotent -- if REVIEW-UAT.md already exists, overwrite it.
Format:
```markdown
# REVIEW-UAT: {setId}

## Summary
| Metric | Value |
|--------|-------|
| Total Scenarios | {count} |
| Passed | {passed} |
| Failed | {failed} |
| Skipped | {skipped} |
| Human Verified | {passed + failed + skipped} |

## Criteria Coverage
| Criterion | Scenarios | Status |
|-----------|-----------|--------|
| [wave-1] Criterion text | 2 | PASS |
| [wave-2] Criterion text | 1 | FAIL |

## Scenario Results

### PASS: {scenario name}
- **Criterion:** {criterion}
- **Steps:** {passed}/{total}

### FAIL: {scenario name}
- **Criterion:** {criterion}
- **Steps:** {passed}/{total}
- **Failed Steps:**
  - Step 3: {step description} -- {error}

## Failed Scenarios Detail

### {scenario name}
- **Criterion:** {criterion}
- **Error:** {error description}
- **Relevant Files:** {file list}
```
Only if failures.length > 0. If there are no failures, do NOT write UAT-FAILURES.md (no file = no failures).
Write to:
- Standard mode: `.planning/sets/{setId}/UAT-FAILURES.md`
- Post-merge mode: `.planning/post-merge/{setId}/UAT-FAILURES.md`

If re-running and a previous UAT-FAILURES.md exists, overwrite it (clean overwrite).
The content follows the format locked in Wave 1:
```markdown
# UAT-FAILURES
<!-- UAT-FORMAT:v2 -->
<!-- UAT-FAILURES-META {JSON object with "failures" array} -->

## Failures

### {id}: {criterion}
- **Step:** {step instruction}
- **Severity:** {severity}
- **Description:** {description}
- **User Notes:** {userNotes, if non-empty}
```
The <!-- UAT-FAILURES-META --> block contains a JSON object with a failures array. Each failure object has these fields:
- `id`: Unique identifier (`<setId>-uat-<stepIndex>`)
- `criterion`: The acceptance criterion text
- `step`: The step instruction that failed
- `description`: Combined description ("Expected: {expected}. Step: {instruction}")
- `severity`: One of critical, high, medium, low
- `relevantFiles`: Array of relevant source file paths
- `userNotes`: Empty string (freeform input not available via AskUserQuestion)
- `expectedBehavior`: The expected field from the step
- `actualBehavior`: "Failed (human reported)"

For each failed UAT scenario, log an issue.
CLI Flags (recommended):
node "${RAPID_TOOLS}" review log-issue "{setId}" \
--type "uat" \
--severity "{severity from human's choice in Step 6}" \
--file "{primary relevant file}" \
--description "UAT failure: {scenario name} -- {failure detail}" \
--source "uat"
Stdin JSON alternative:
```bash
echo '{"id":"<uuid>","type":"uat","severity":"{severity}","file":"{primary relevant file}","description":"UAT failure: {scenario name} -- {failure detail}","source":"uat","createdAt":"<iso-timestamp>"}' | \
  node "${RAPID_TOOLS}" review log-issue "{setId}"
```
The CLI flag interface auto-generates id and createdAt. The stdin JSON interface requires all fields including id and createdAt.
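A sketch of building that stdin payload with a generated id and timestamp; the file and description values here are placeholders, not real scope data:

```bash
# Sketch: construct the stdin-JSON issue payload. id and createdAt are
# generated here because the stdin interface does not auto-generate them.
PAYLOAD=$(node -e "
  const { randomUUID } = require('crypto');
  console.log(JSON.stringify({
    id: randomUUID(),
    type: 'uat',
    severity: 'high',
    file: 'src/example.ts',
    description: 'UAT failure: Example scenario -- placeholder detail',
    source: 'uat',
    createdAt: new Date().toISOString(),
  }));
")
echo "$PAYLOAD"
# then: echo "$PAYLOAD" | node "${RAPID_TOOLS}" review log-issue "{setId}"
```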
Use the severity the human selected in Step 6 for each failure. Do not apply a heuristic -- the human's verdict is authoritative.
If in post-merge mode, issues are logged to .planning/post-merge/{setId}/REVIEW-ISSUES.json.
If POST_MERGE=true: Skip this step. Post-merge reviews do not track pipeline state.
Determine the verdict based on UAT results:
- `pass`
- `partial`
- `fail`

Mark the UAT stage as complete:
node "${RAPID_TOOLS}" review mark-stage "${SET_NAME}" uat {verdict}
Print the completion banner:
```
--- RAPID UAT Complete ---
Set: {setId}{postMerge ? ' (post-merge)' : ''}
Scenarios: {passed}/{total} passed
Failed: {failed}
Skipped: {skipped}
Human Verified: {passed + failed + skipped}
Criteria Coverage: {coveredCriteria}/{totalCriteria}
Issues Logged: {issueCount}
Failures Logged: {failures.length}
Output: {path to REVIEW-UAT.md}

Next steps:
  /rapid:bug-hunt {setIndex} -- Run adversarial bug hunt
  /rapid:review summary {setIndex} -- Generate review summary
----------------------------
```
Display the completion footer:
if [ -z "${RAPID_TOOLS:-}" ] && [ -n "${CLAUDE_SKILL_DIR:-}" ] && [ -f "${CLAUDE_SKILL_DIR}/../../.env" ]; then export $(grep -v '^#' "${CLAUDE_SKILL_DIR}/../../.env" | xargs); fi
if [ -z "${RAPID_TOOLS}" ]; then echo "[RAPID ERROR] RAPID_TOOLS is not set. Run /rapid:install or ./setup.sh to configure RAPID."; exit 1; fi
node "${RAPID_TOOLS}" display footer "/rapid:review summary {setIndex}" --breadcrumb "review [done] > unit-test [done] > bug-hunt [done] > uat [done]"
Then exit. Do NOT prompt for stage selection.
- `/rapid:uat` overwrites REVIEW-UAT.md and UAT-FAILURES.md. Previous results are not accumulated. Git history preserves previous versions.
- UAT input comes from `/rapid:review`. If the scope is stale, re-run `/rapid:review` first.