npx claudepluginhub reinamaccredy/maestro --plugin maestro

This skill uses the workspace's default tool permissions.
Run user testing validation during mission checkpoints. Determines testable assertions, sets up test environment, spawns flow validators, and synthesizes results.
$ARGUMENTS
- `<milestone>`: Validate a specific milestone (e.g., bootstrap, core)
- `--all`: Validate all milestones
- `--flows`: Comma-separated list of specific flows to test

Use during Mission Control validation checkpoints when:
- maestro:agent-base instructs final validation
- maestro:agent-base - For startup/cleanup procedures
- maestro:scrutiny-validator - Often runs in sequence (code review first, then user testing)

Check current validation status:
maestro validation show
Read validation-contract.md to identify testable assertions:
- `fulfills` field references contract assertions

From the validation contract, identify assertions that:
Example assertions:
- **VAL-CLI-001**: `maestro mission list` outputs valid JSON with `--json`
- **VAL-FEAT-003**: Feature handoff creates agent artifact directory
- **VAL-INT-005**: Integration test passes with temp git repo
For CLI testing:
- `maestro mission create`
- `.maestro/bootstrap/services.yaml`

For UI testing:
For API testing:
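No API example appears above; as a sketch, the response-shape check can be kept pure so the same logic runs against a live `fetch()` later (the endpoint URL and the `missions` field are assumptions, not part of the maestro spec):

```typescript
// Pure response-shape validator: no server needed to exercise the logic.
function validMissionList(body: unknown): boolean {
  return (
    typeof body === "object" &&
    body !== null &&
    Array.isArray((body as { missions?: unknown }).missions)
  );
}

// Against a live endpoint (hypothetical URL):
// const res = await fetch("http://localhost:3000/api/missions");
// if (res.status !== 200 || !validMissionList(await res.json())) { /* blocker */ }
```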
For each identified assertion:
Determine validation surface:
Spawn user-testing-flow-validator subagent:
Each flow validator executes:
interface FlowTest {
assertionId: string;
description: string;
command?: string; // CLI command to run
expectedOutput?: string; // Expected stdout/stderr pattern
expectedFiles?: string[]; // Files that should exist
expectedExitCode?: number; // Expected exit code (default 0)
}
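A concrete instance of the interface, paired with a minimal evaluator (the values and the `passes` helper are illustrative, not the flow validator's actual API):

```typescript
// Sample FlowTest for the VAL-CLI-001 assertion (values illustrative).
const sample = {
  assertionId: "VAL-CLI-001",
  description: "mission list emits valid JSON",
  command: "maestro mission list --json",
  expectedOutput: '"missions"',
  expectedExitCode: 0,
};

// Compare expectations against a captured command run.
function passes(
  t: { expectedOutput?: string; expectedExitCode?: number },
  run: { stdout: string; exitCode: number },
): boolean {
  if ((t.expectedExitCode ?? 0) !== run.exitCode) return false;
  if (t.expectedOutput && !run.stdout.includes(t.expectedOutput)) return false;
  return true;
}
```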
Example flow execution:
# Test CLI JSON output
cd /tmp/test-repo
maestro mission list --json | jq '.missions'
# Verify: Returns valid JSON array
# Test file creation
maestro feature prompt feat-001 --mission m1 --out /tmp/p.md
# Verify: File /tmp/p.md exists and contains required sections
Collect all flow validator results:
interface UserTestingSynthesis {
milestone: string;
assertionsTested: number;
assertionsPassed: number;
assertionsFailed: number;
flows: Array<{
flowName: string;
status: "passed" | "failed" | "skipped";
assertions: string[];
duration: number;
}>;
blockers: Array<{
assertionId: string;
flowName: string;
expected: string;
actual: string;
}>;
}
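One way to derive the overall milestone status from this synthesis (a hypothetical helper; the field names mirror the interface above, but the real skill may weigh results differently):

```typescript
// A milestone fails if any blocker was recorded or any flow failed.
function overallStatus(
  flows: { status: "passed" | "failed" | "skipped" }[],
  blockers: unknown[],
): "passed" | "failed" {
  if (blockers.length > 0) return "failed";
  return flows.some((f) => f.status === "failed") ? "failed" : "passed";
}
```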
Record results:
maestro validation update --milestone <name> --status <passed|failed>
Next step: return results to maestro:conduct. If status is passed and scrutiny validation also passed, the conductor proceeds to maestro milestone seal. If status is failed, the conductor re-opens the affected features for agent re-dispatch. Do not seal the milestone yourself; that is the conductor's gate.
Human format:
=== User Testing Validation: <milestone> ===
Flows Tested: 3
Assertions Passed: 6/8
Flow Results:
✓ mission-creation-flow (2 assertions)
✗ feature-prompt-flow (1 failed)
- VAL-FEAT-003: Agent artifact not created
Expected: .maestro/missions/{id}/agents/...
Actual: Directory missing
Status: FAILED (1 blocking issue)
JSON format (with --json):
{
"milestone": "core",
"summary": {
"flowsTested": 3,
"assertionsPassed": 6,
"assertionsTotal": 8
},
"flows": [...],
"blockers": [...],
"status": "failed"
}
| Surface | Testing Approach |
|---|---|
| CLI Commands | Execute via Bun subprocess, verify output |
| File System | Verify file existence, read and validate content |
| HTTP APIs | curl/http client requests, verify response |
| UI/Browser | Browser automation, screenshot verification |
| Database | Query verification, state checks |
| Services | Healthcheck endpoints, status commands |
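For the File System surface, a check might look like the following (the `verifyFile` helper and section names are illustrative, not part of maestro):

```typescript
import { existsSync, readFileSync } from "node:fs";

// Verify a generated file exists and contains all required sections.
function verifyFile(
  path: string,
  requiredSections: string[],
): { exists: boolean; missingSections: string[] } {
  if (!existsSync(path)) {
    return { exists: false, missingSections: requiredSections };
  }
  const content = readFileSync(path, "utf8");
  return {
    exists: true,
    missingSections: requiredSections.filter((s) => !content.includes(s)),
  };
}
```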
CLI Testing Pattern:
import assert from "node:assert";
import { exec as execCb } from "node:child_process";
import { mkdtemp } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { promisify } from "node:util";
const exec = promisify(execCb);

// Create temp repo
const tempDir = await mkdtemp(join(tmpdir(), 'test-'));
await exec(`git init ${tempDir}`);
// Run command and capture (cwd option avoids a shell `cd`)
const result = await exec('maestro mission list --json', { cwd: tempDir });
const output = JSON.parse(result.stdout);
// Verify
assert(Array.isArray(output.missions));
Service Testing Pattern:
import assert from "node:assert";
import { spawn } from "node:child_process";

// Start service in the background (spawn does not block like exec)
const server = spawn('bun', ['run', 'src/index.ts', 'server'], { stdio: 'ignore' });
await new Promise((r) => setTimeout(r, 1000)); // Wait for startup
// Healthcheck
const health = await fetch('http://localhost:3000/health');
assert(health.status === 200);
// Clean up
server.kill();
| Command | Purpose |
|---|---|
| `maestro validation show` | Show current validation state |
| `maestro validation update` | Update validation status |
| `maestro checkpoint save` | Save checkpoint after testing |
| `maestro checkpoint list` | List saved checkpoints |
User testing runs after scrutiny validation:
Feature Complete → Scrutiny Review → User Testing → Milestone Seal
                        ↓                 ↓
                    Blockers?         Blockers?
                        ↓                 ↓
                 Return to fix     Return to fix
Both must pass before maestro milestone seal succeeds.
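The gate above can be sketched as a predicate (a simplification; the real gate lives in maestro:conduct, and the `Status` type here is an assumption):

```typescript
type Status = "passed" | "failed" | "pending";

// Sealing requires both validations to have passed.
function canSeal(scrutiny: Status, userTesting: Status): boolean {
  return scrutiny === "passed" && userTesting === "passed";
}
```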