From potenlab-workflow
Runs Vitest tests for a specific phase chosen by the user. Reads test-plan.md to identify phases, asks user which phase to run via AskUserQuestion, executes only those test files, and generates test.result.json (replaced on every run). Uses qa-specialist to analyze failures. Triggers on: run test phase, test phase, run phase test, test specific phase.
npx claudepluginhub potenlab/marketplace-potenlab --plugin potenlab-workflow

This skill uses the workspace's default tool permissions.
Run Vitest tests for a specific phase from `test-plan.md`, collect results, and generate `test.result.json`.
Use /run-test-phase when: you want to run only the tests for one specific phase from test-plan.md.
Do NOT use when: you want to run the entire test suite — use /run-test-all instead.
Related commands: `/run-test-all`, `/generate-test`

                  /run-test-phase [phase]
                             |
                             v
+----------------------------------------------------------+
| STEP 1: Read test-plan.md                                 |
| - Parse all phases and their features/test files          |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 2: Get phase choice                                  |
| - From argument: /run-test-phase 1                        |
| - OR from AskUserQuestion with phase list                 |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 3: Resolve test files for the chosen phase           |
| - Map phase → features → test files                       |
| - tests/features/{name}/**/*.test.ts                      |
| - tests/rls/{name}*.test.ts                               |
| - tests/constraints/{name}*.test.ts                       |
| - supabase/**/{name}*.test.ts                             |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 4: Run vitest with file filter                       |
| - npx vitest run {file1} {file2} ... --reporter=json      |
| - Capture results for selected files only                 |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 5: Parse results & generate test.result.json         |
| - Replaces previous test.result.json entirely             |
| - Scoped to the chosen phase                              |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 6: Analyze failures (if any)                         |
| - Spawn qa-specialist for failure analysis                |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 7: Report results                                    |
+----------------------------------------------------------+
Glob: **/test-plan.md OR **/test.plan.md
Read: [found path]
Parse the test plan to extract all phases and the features/test files belonging to each.
If test-plan.md does NOT exist:
`test-plan.md` not found. Cannot determine phases. Use `/run-test-all` to run everything, or create a test-plan.md first.
STOP. Do NOT proceed.
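The phase parsing above can be sketched as a small helper. This is a minimal sketch, assuming test-plan.md marks each phase with a heading like `## Phase 1: Auth` — the exact heading format is an assumption, not part of the skill:

```typescript
interface PhasePlan {
  number: number;
  name: string;
}

// Extract "Phase N: Name" headings from a test-plan.md markdown string.
function extractPhases(markdown: string): PhasePlan[] {
  const phases: PhasePlan[] = [];
  for (const line of markdown.split("\n")) {
    // Match headings such as "## Phase 2: Orders"
    const m = line.match(/^#{1,4}\s*Phase\s+(\d+)\s*:\s*(.+)$/);
    if (m) phases.push({ number: Number(m[1]), name: m[2].trim() });
  }
  return phases;
}
```

If the plan uses a different convention (bold text, a table of phases), the matcher would need to be adapted.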
If the user provides a phase:
/run-test-phase 1 → Run Phase 1 tests
/run-test-phase auth → Run tests for the auth phase/feature
/run-test-phase 3 → Run Phase 3 tests
Extract the phase directly. Do NOT ask questions — proceed to Step 3.
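The dispatch rule above — a numeric argument selects a phase by number, anything else is treated as a phase/feature name — can be sketched as follows (the function name and return shape are illustrative, not part of the skill):

```typescript
type PhaseChoice =
  | { kind: "number"; phase: number }
  | { kind: "name"; name: string };

// Decide how to interpret the argument to /run-test-phase.
function parsePhaseArg(arg: string): PhaseChoice {
  const trimmed = arg.trim();
  // Pure digits → phase number ("/run-test-phase 1")
  if (/^\d+$/.test(trimmed)) {
    return { kind: "number", phase: Number(trimmed) };
  }
  // Anything else → phase/feature name ("/run-test-phase auth")
  return { kind: "name", name: trimmed.toLowerCase() };
}
```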
Read test-plan.md to build the phase list dynamically, then ask:
AskUserQuestion:
  question: "Which phase do you want to run tests for?"
  header: "Test Phase"
  options:
    - label: "Phase 1: {name}"
      description: "{feature_count} features, {test_count} test files"
    - label: "Phase 2: {name}"
      description: "{feature_count} features, {test_count} test files"
    - label: "Phase 3: {name}"
      description: "{feature_count} features, {test_count} test files"
Build options from the actual phases found in test-plan.md. Show up to 4 phases. If more exist, add "Other" for custom input.
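A hedged sketch of how those options could be assembled from the parsed plan — the `PhaseInfo` shape and the "Other" description text are assumptions:

```typescript
interface PhaseInfo {
  number: number;
  name: string;
  featureCount: number;
  testCount: number;
}

interface Option {
  label: string;
  description: string;
}

// Build AskUserQuestion options: show up to 4 phases; if more exist,
// append an "Other" option for custom input.
function buildPhaseOptions(phases: PhaseInfo[]): Option[] {
  const shown: Option[] = phases.slice(0, 4).map((p) => ({
    label: `Phase ${p.number}: ${p.name}`,
    description: `${p.featureCount} features, ${p.testCount} test files`,
  }));
  if (phases.length > 4) {
    shown.push({ label: "Other", description: "Type a phase number or name" });
  }
  return shown;
}
```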
From test-plan.md, extract the features belonging to the chosen phase (`{chosen_phase}`):
Phase 1: Auth → features: [auth, profiles]
Phase 2: Orders → features: [orders, payments, invoices]
For each feature in the phase, search for test files:
Glob: tests/features/{feature}/**/*.test.ts
Glob: tests/rls/{feature}*.test.ts
Glob: tests/constraints/{feature}*.test.ts
Glob: supabase/**/{feature}*.test.ts
Collect ALL matching test files into {phase_test_files}.
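The four search locations can be expressed as one small helper (the patterns are taken verbatim from the glob list above):

```typescript
// Return the glob patterns to search for a feature's test files.
function testFileGlobs(feature: string): string[] {
  return [
    `tests/features/${feature}/**/*.test.ts`,
    `tests/rls/${feature}*.test.ts`,
    `tests/constraints/${feature}*.test.ts`,
    `supabase/**/${feature}*.test.ts`,
  ];
}
```

Running each pattern through a glob tool and flattening the results yields `{phase_test_files}`.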
If no test files found for the chosen phase:
No test files found for Phase {N}. Features: {feature_list}. Run `/generate-test {feature}` to generate tests first.
STOP.
Report discovery:
### Phase {N}: {name} — Test Files
| Feature | Test Files |
|---------|-----------|
| {feature} | {file1}, {file2} |
| {feature} | {file1} |
| **Total** | **{total_files} files** |
Check that Supabase local is running:
npx supabase status
If not running, warn and ask (same as /run-test-all):
AskUserQuestion:
  question: "Supabase local doesn't seem to be running. Proceed anyway?"
  header: "Supabase"
  options:
    - label: "Run tests anyway"
      description: "Some tests may fail due to missing database connection"
    - label: "Stop — I'll start Supabase first"
      description: "I'll run npx supabase start and come back"
Run ONLY the phase's test files:
npx vitest run {file1} {file2} {file3} --reporter=json --reporter=default --outputFile=docs/vitest-raw-output.json 2>&1
If the file list is very long (>10 files), use a glob pattern instead:
npx vitest run "tests/features/{feature1}/**/*.test.ts" "tests/features/{feature2}/**/*.test.ts" --reporter=json --reporter=default --outputFile=docs/vitest-raw-output.json 2>&1
If Vitest is not installed, STOP and tell the user:
"Vitest is not installed. Run: npm install -D vitest"
Read: docs/vitest-raw-output.json
ALWAYS replace the file entirely — never append.
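A sketch of the raw-output → suite-record transform. The input shape mirrors the Jest-compatible JSON that Vitest's `--reporter=json` emits (`testResults` / `assertionResults`); treat those field names as an assumption to verify against your Vitest version:

```typescript
interface RawAssertion {
  fullName: string;
  status: "passed" | "failed" | "skipped" | "pending";
  failureMessages: string[];
}

interface RawSuite {
  name: string; // test file path
  assertionResults: RawAssertion[];
}

// Convert one raw suite entry into the suite record used in test.result.json.
function toSuiteRecord(raw: RawSuite, feature: string) {
  const failed = raw.assertionResults.filter((a) => a.status === "failed");
  const count = (s: string) =>
    raw.assertionResults.filter((a) => a.status === s).length;
  return {
    file: raw.name,
    feature,
    status: failed.length > 0 ? "fail" : "pass",
    tests: {
      total: raw.assertionResults.length,
      passed: count("passed"),
      failed: failed.length,
      // Vitest may report skipped tests as "skipped" or "pending"
      skipped: count("skipped") + count("pending"),
    },
    failures: failed.map((a) => ({
      name: a.fullName,
      error: a.failureMessages[0] ?? "unknown",
    })),
  };
}
```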
{
  "generated": "2026-02-10T12:00:00.000Z",
  "command": "run-test-phase",
  "scope": {
    "phase": "Phase 1: Auth",
    "phase_number": 1,
    "features": ["auth", "profiles"]
  },
  "duration_ms": 4567,
  "summary": {
    "total_suites": 3,
    "passed_suites": 2,
    "failed_suites": 1,
    "total_tests": 35,
    "passed": 32,
    "failed": 2,
    "skipped": 1,
    "pass_rate": "91.4%"
  },
  "by_feature": {
    "auth": {
      "suites": 2,
      "tests": 20,
      "passed": 18,
      "failed": 2,
      "skipped": 0
    },
    "profiles": {
      "suites": 1,
      "tests": 15,
      "passed": 14,
      "failed": 0,
      "skipped": 1
    }
  },
  "suites": [
    {
      "file": "tests/features/auth/auth.test.ts",
      "feature": "auth",
      "status": "fail",
      "duration_ms": 1234,
      "tests": {
        "total": 12,
        "passed": 10,
        "failed": 2,
        "skipped": 0
      },
      "failures": [
        {
          "name": "Auth CRUD > should create user with valid data",
          "error": "Expected null, received { code: '23502', message: '...' }",
          "line": 45
        },
        {
          "name": "Auth RLS > should deny other user access",
          "error": "Expected 0, received 1",
          "line": 78
        }
      ]
    },
    {
      "file": "tests/features/auth/auth-rls.test.ts",
      "feature": "auth",
      "status": "pass",
      "duration_ms": 890,
      "tests": {
        "total": 8,
        "passed": 8,
        "failed": 0,
        "skipped": 0
      },
      "failures": []
    },
    {
      "file": "tests/features/profiles/profiles.test.ts",
      "feature": "profiles",
      "status": "pass",
      "duration_ms": 678,
      "tests": {
        "total": 15,
        "passed": 14,
        "failed": 0,
        "skipped": 1
      },
      "failures": []
    }
  ]
}
Write: docs/test.result.json
rm docs/vitest-raw-output.json
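The `summary` block in the example above is a straight roll-up of the suite-level counts. A sketch of that aggregation, with the `SuiteResult` shape assumed from the example:

```typescript
interface SuiteResult {
  status: "pass" | "fail" | "error";
  tests: { total: number; passed: number; failed: number; skipped: number };
}

// Roll suite-level counts up into the "summary" block of test.result.json.
function summarize(suites: SuiteResult[]) {
  const sum = (f: (s: SuiteResult) => number) =>
    suites.reduce((acc, s) => acc + f(s), 0);
  const total = sum((s) => s.tests.total);
  const passed = sum((s) => s.tests.passed);
  return {
    total_suites: suites.length,
    passed_suites: suites.filter((s) => s.status === "pass").length,
    failed_suites: suites.filter((s) => s.status !== "pass").length,
    total_tests: total,
    passed,
    failed: sum((s) => s.tests.failed),
    skipped: sum((s) => s.tests.skipped),
    // Pass rate as in the example: passed / total, one decimal place.
    pass_rate: total === 0 ? "n/a" : `${((passed / total) * 100).toFixed(1)}%`,
  };
}
```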
If ALL tests passed → skip this step.
If there are failures, spawn a qa-specialist agent:
Task:
  subagent_type: qa-specialist
  description: "Analyze Phase {N} test failures"
  prompt: |
    Analyze the following test failures from Phase {N}: {phase_name}.

    Read context:
    - docs/test.result.json (test results with failure details)
    - references/vitest-best-practices.md (testing rules)
    - The failing test files (read each one)
    - The source files being tested (read each one)
    - backend-plan.md (schema and RLS context for this phase)

    For each failure:
    1. Read the failing test file and the line that failed
    2. Read the source code being tested
    3. Determine root cause:
       - Is the test wrong? (assertion mismatch, wrong expectation)
       - Is the source code wrong? (bug in implementation)
       - Is the schema wrong? (missing column, wrong constraint)
       - Is RLS wrong? (policy too permissive or too restrictive)
       - Is test data wrong? (missing seed data, wrong setup)
    4. Provide a specific fix recommendation

    Return a structured analysis:

    FAILURE ANALYSIS:
    ---
    Test: {test name}
    File: {file path}:{line}
    Feature: {feature name}
    Root Cause: {test_bug | source_bug | schema_issue | rls_issue | data_issue}
    Explanation: {what went wrong}
    Fix: {specific action to fix it}
    ---
    [repeat for each failure]

    SUMMARY:
    - {N} test bugs (fix the test)
    - {N} source bugs (fix the implementation)
    - {N} schema issues (fix the migration)
    - {N} RLS issues (fix the policy)
    - {N} data issues (fix seed/setup)

    Do NOT modify any files. Analysis only.
## Test Results — Phase {N}: {name}
**Run at:** {timestamp}
**Duration:** {duration_ms}ms
**Scope:** Phase {N} — {feature_list}
**Command:** `npx vitest run {files}`
### Summary
| Metric | Count |
|--------|-------|
| Test Suites | {total_suites} |
| Passed Suites | {passed_suites} |
| Failed Suites | {failed_suites} |
| Total Tests | {total_tests} |
| Passed | {passed} |
| Failed | {failed} |
| Skipped | {skipped} |
| **Pass Rate** | **{pass_rate}** |
### Results by Feature
| Feature | Suites | Tests | Passed | Failed | Status |
|---------|--------|-------|--------|--------|--------|
| {feature} | {suites} | {tests} | {passed} | {failed} | {pass/fail} |
| {feature} | {suites} | {tests} | {passed} | {failed} | {pass/fail} |
### Failures (if any)
| Test | File | Feature | Root Cause | Fix |
|------|------|---------|------------|-----|
| {test name} | {file}:{line} | {feature} | {cause} | {fix} |
### Output
- **test.result.json:** `docs/test.result.json` (replaced)
### Next Steps
1. Fix failing tests using the failure analysis above
2. Re-run this phase: `/run-test-phase {N}`
3. Run all tests: `/run-test-all`
4. Run a different phase: `/run-test-phase`
5. Generate more tests: `/generate-test {feature}`
If test-plan.md is missing, STOP and tell the user:
"test-plan.md not found. Cannot determine phases.
Use /run-test-all to run everything, or create a test-plan.md first."
Tell user: "Phase {N} not found in test-plan.md. Available phases: {list}."
Use AskUserQuestion to let them pick a valid phase.
Tell user: "No test files found for Phase {N} features: {list}.
Run /generate-test {feature} to generate tests first."
STOP.
If Supabase local is not running, warn and ask whether the user wants to proceed or start Supabase first.
If Vitest is not installed, STOP and tell the user: "Vitest not installed. Run: npm install -D vitest"
If a test file crashes mid-run:
1. Report which test file crashed
2. Still process results from files that completed
3. Mark crashed suites in test.result.json with status: "error"
4. Suggest checking the test file for syntax errors
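Point 3 above could be realized roughly like this (the `Suite` shape is illustrative, trimmed to the fields involved):

```typescript
interface Suite {
  file: string;
  status: "pass" | "fail" | "error";
}

// Mark suites whose file crashed with status "error", leaving
// results from completed files intact.
function markCrashed(suites: Suite[], crashedFiles: string[]): Suite[] {
  const crashed = new Set(crashedFiles);
  return suites.map((s) =>
    crashed.has(s.file) ? { ...s, status: "error" as const } : s
  );
}
```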