From potenlab-workflow
Runs ALL Vitest tests across /features, /tests, and /supabase directories. Executes tests via npx vitest run, collects results, and generates test.result.json (replaced on every run). Uses qa-specialist to analyze failures and provide actionable feedback. Triggers on: run test all, run all tests, test all, vitest all, run tests.
`npx claudepluginhub potenlab/marketplace-potenlab --plugin potenlab-workflow`

This skill uses the workspace's default tool permissions.
Run every Vitest test file in the project, collect results, and generate `test.result.json`.
Use /run-test-all when:
- You want to run every test suite in the project in one pass and regenerate test.result.json
- You need a full regression check after finishing a phase or applying fixes

Do NOT use when:
- You only care about a single phase or feature (use /run-test-phase instead)
- No test files have been generated yet (run /generate-test first)
Related commands: `/run-test-phase`, `/generate-test`, `/run-test-all`
+----------------------------------------------------------+
| /run-test-all                                             |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 1: Discover all test files                           |
| - tests/**/*.test.ts                                      |
| - supabase/**/*.test.ts                                   |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 2: Read test-plan.md for context                     |
| - Map test files to phases/features                       |
| - Understand expected behavior per test                   |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 3: Run vitest                                        |
| - npx vitest run --reporter=json --outputFile=...         |
| - Capture all results (pass, fail, skip, duration)        |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 4: Parse results & generate test.result.json         |
| - Replaces previous test.result.json entirely             |
| - Structured by feature/phase with pass/fail counts       |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 5: Analyze failures (if any)                         |
| - Spawn qa-specialist to analyze failing tests            |
| - Provide actionable fix suggestions                      |
+----------------------------------------------------------+
                             |
                             v
+----------------------------------------------------------+
| STEP 6: Report results                                    |
+----------------------------------------------------------+
Find every test file in the project:
Glob: tests/**/*.test.ts
Glob: supabase/**/*.test.ts
If no test files are found, tell the user:

No test files found. Run `/generate-test` first to generate test files from test-plan.md.

STOP. Do NOT proceed.
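If you script this step instead of using the Glob tool, a minimal sketch could look like the following (fast-glob is an assumed dependency, not something this skill requires):

```ts
// Sketch only: discover test files with fast-glob (assumed dev dependency).
import fg from "fast-glob";

const patterns = ["tests/**/*.test.ts", "supabase/**/*.test.ts"];
const testFiles = await fg(patterns);

if (testFiles.length === 0) {
  // Mirror the skill's rule: stop and point the user at /generate-test.
  throw new Error("No test files found. Run /generate-test first.");
}
```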
Report discovery:
### Test Files Found
| Directory | Files |
|-----------|-------|
| tests/features/ | {count} |
| tests/rls/ | {count} |
| tests/constraints/ | {count} |
| supabase/ | {count} |
| **Total** | **{total}** |
Glob: **/test-plan.md OR **/test.plan.md
Read: [found path]
Extract the phase-to-feature mapping so results can be grouped by phase in test.result.json.
If test-plan.md does not exist, warn but proceed:
test-plan.md not found. Results will be grouped by directory only, not by phase.
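The exact layout of test-plan.md is project-specific. Assuming phases appear as headings like `## Phase 1: Auth` with feature bullets underneath, a rough extraction sketch might be:

```ts
// Sketch only: assumes "## Phase N: Title" headings followed by "- feature" bullets.
import { readFileSync } from "node:fs";

function extractPhaseMap(planPath: string): Record<string, string[]> {
  const phases: Record<string, string[]> = {};
  let current: string | null = null;

  for (const line of readFileSync(planPath, "utf8").split("\n")) {
    const heading = line.match(/^##\s+(Phase\s+\d+.*)/);
    if (heading) {
      current = heading[1].trim();
      phases[current] = [];
    } else if (current) {
      const bullet = line.match(/^-\s+(\S+)/);
      if (bullet) phases[current].push(bullet[1]);
    }
  }
  return phases;
}
```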
Check whether Supabase local is running:

npx supabase status
If Supabase is NOT running, warn:
Warning: Supabase local does not appear to be running. Tests that connect to the database will fail. Start it with:
npx supabase start
Ask the user:
AskUserQuestion:
question: "Supabase local doesn't seem to be running. Proceed anyway?"
header: "Supabase"
options:
- label: "Run tests anyway"
description: "Some tests may fail due to missing database connection"
- label: "Stop — I'll start Supabase first"
description: "I'll run npx supabase start and come back"
Run ALL tests with JSON reporter:
npx vitest run --reporter=json --reporter=default --outputFile=docs/vitest-raw-output.json 2>&1
Capture everything the run produces: exit code, pass/fail/skip counts, and per-suite durations.

If the command fails because Vitest itself is not installed, STOP. Tell user:

"Vitest is not installed. Run: npm install -D vitest"
Read: docs/vitest-raw-output.json
Build test.result.json with this structure:
{
"generated": "2026-02-10T12:00:00.000Z",
"command": "run-test-all",
"scope": "all",
"duration_ms": 12345,
"summary": {
"total_suites": 10,
"passed_suites": 8,
"failed_suites": 2,
"total_tests": 85,
"passed": 78,
"failed": 5,
"skipped": 2,
"pass_rate": "91.8%"
},
"by_directory": {
"tests/features": {
"suites": 6,
"tests": 50,
"passed": 47,
"failed": 3,
"skipped": 0
},
"tests/rls+constraints": {
"suites": 3,
"tests": 25,
"passed": 23,
"failed": 2,
"skipped": 0
},
"supabase": {
"suites": 1,
"tests": 10,
"passed": 8,
"failed": 0,
"skipped": 2
}
},
"by_phase": {
"Phase 1: Auth": {
"suites": 2,
"tests": 20,
"passed": 18,
"failed": 2,
"features": ["auth"]
},
"Phase 2: Orders": {
"suites": 3,
"tests": 30,
"passed": 28,
"failed": 2,
"features": ["orders", "payments"]
}
},
"suites": [
{
"file": "tests/features/auth/auth.test.ts",
"status": "fail",
"duration_ms": 1234,
"tests": {
"total": 15,
"passed": 13,
"failed": 2,
"skipped": 0
},
"failures": [
{
"name": "Auth CRUD > should create user with valid data",
"error": "Expected null, received { code: '23502', message: '...' }",
"line": 45
},
{
"name": "Auth RLS > should deny other user access",
"error": "Expected 0, received 1",
"line": 78
}
]
},
{
"file": "tests/features/orders/orders.test.ts",
"status": "pass",
"duration_ms": 890,
"tests": {
"total": 20,
"passed": 20,
"failed": 0,
"skipped": 0
},
"failures": []
}
]
}
ALWAYS replace the file entirely — never append.
Write: docs/test.result.json
rm docs/vitest-raw-output.json
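Vitest's `--reporter=json` output follows a Jest-style shape; the field names below (`numTotalTests`, `testResults`, `assertionResults`, and so on) are what that format typically exposes, but verify them against the actual docs/vitest-raw-output.json before trusting this sketch:

```ts
// Sketch only: condense the raw Vitest JSON report into the summary and suites
// blocks of test.result.json. Field names assume the Jest-compatible reporter shape.
import { readFileSync, writeFileSync } from "node:fs";

const raw = JSON.parse(readFileSync("docs/vitest-raw-output.json", "utf8"));

const summary = {
  total_suites: raw.numTotalTestSuites,
  passed_suites: raw.numPassedTestSuites,
  failed_suites: raw.numFailedTestSuites,
  total_tests: raw.numTotalTests,
  passed: raw.numPassedTests,
  failed: raw.numFailedTests,
  skipped: raw.numPendingTests,
  pass_rate: `${((raw.numPassedTests / raw.numTotalTests) * 100).toFixed(1)}%`,
};

const suites = (raw.testResults ?? []).map((suite: any) => ({
  file: suite.name,
  status: suite.status === "passed" ? "pass" : "fail",
  failures: (suite.assertionResults ?? [])
    .filter((t: any) => t.status === "failed")
    .map((t: any) => ({
      name: t.fullName,
      error: (t.failureMessages ?? []).join("\n"),
    })),
}));

// Replace the previous report entirely; never append.
writeFileSync(
  "docs/test.result.json",
  JSON.stringify(
    { generated: new Date().toISOString(), command: "run-test-all", scope: "all", summary, suites },
    null,
    2
  )
);
```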
If ALL tests passed → skip this step.
If there are failures, spawn a qa-specialist agent to analyze:
Task:
subagent_type: qa-specialist
description: "Analyze test failures"
prompt: |
Analyze the following test failures and provide actionable fixes.
Read context:
- docs/test.result.json (test results with failure details)
- references/vitest-best-practices.md (testing rules)
- The failing test files (read each one)
- The source files being tested (read each one)
For each failure:
1. Read the failing test file and the line that failed
2. Read the source code being tested
3. Determine root cause:
- Is the test wrong? (assertion mismatch, wrong expectation)
- Is the source code wrong? (bug in implementation)
- Is the schema wrong? (missing column, wrong constraint)
- Is RLS wrong? (policy too permissive or too restrictive)
- Is test data wrong? (missing seed data, wrong setup)
4. Provide a specific fix recommendation
Return a structured analysis:
FAILURE ANALYSIS:
---
Test: {test name}
File: {file path}:{line}
Root Cause: {test_bug | source_bug | schema_issue | rls_issue | data_issue}
Explanation: {what went wrong}
Fix: {specific action to fix it}
---
[repeat for each failure]
SUMMARY:
- {N} test bugs (fix the test)
- {N} source bugs (fix the implementation)
- {N} schema issues (fix the migration)
- {N} RLS issues (fix the policy)
- {N} data issues (fix seed/setup)
Do NOT modify any files. Analysis only.
## Test Results — All Tests
**Run at:** {timestamp}
**Duration:** {duration_ms}ms
**Command:** `npx vitest run`
### Summary
| Metric | Count |
|--------|-------|
| Test Suites | {total_suites} |
| Passed Suites | {passed_suites} |
| Failed Suites | {failed_suites} |
| Total Tests | {total_tests} |
| Passed | {passed} |
| Failed | {failed} |
| Skipped | {skipped} |
| **Pass Rate** | **{pass_rate}** |
### Results by Directory
| Directory | Tests | Passed | Failed | Status |
|-----------|-------|--------|--------|--------|
| tests/features/ | {tests} | {passed} | {failed} | {pass/fail} |
| tests/rls/ | {tests} | {passed} | {failed} | {pass/fail} |
| tests/constraints/ | {tests} | {passed} | {failed} | {pass/fail} |
| supabase/ | {tests} | {passed} | {failed} | {pass/fail} |
### Results by Phase
| Phase | Features | Tests | Passed | Failed |
|-------|----------|-------|--------|--------|
| Phase 1: Auth | auth | {tests} | {passed} | {failed} |
| Phase 2: Orders | orders, payments | {tests} | {passed} | {failed} |
### Failures (if any)
| Test | File | Root Cause | Fix |
|------|------|------------|-----|
| {test name} | {file}:{line} | {cause} | {fix recommendation} |
### Output
- **test.result.json:** `docs/test.result.json` (replaced)
### Next Steps
1. Fix failing tests using the failure analysis above
2. Re-run: `/run-test-all` to verify fixes
3. Run specific phase: `/run-test-phase` to focus on one area
4. Generate more tests: `/generate-test {feature}`