Multi-agent parallel E2E validation for database refactors. TRIGGERS - E2E validation, schema migration testing, database refactor validation.
From `quality-tools`: install with `npx claudepluginhub terrylica/cc-skills --plugin quality-tools`.

Bundled references:
- `references/agent_test_template.py`
- `references/bug_severity_classification.md`
- `references/evolution-log.md`
- `references/example_validation_findings.md`
Self-Evolving Skill: This skill improves through use. If instructions are wrong, parameters drifted, or a workaround was needed — fix this file immediately, don't defer. Only update for real, reproducible issues.
Prescriptive workflow for spawning parallel validation agents to comprehensively test database refactors. In one QuestDB migration it identified 5 critical bugs (100% system failure rate) that would otherwise have shipped to production.
Use this skill when validating a database refactor end-to-end, testing a schema migration, or verifying a storage-layer rewrite before release.
Key outcomes:
- **Layer 1**: Environment Setup
- **Layer 2**: Data Flow Validation
- **Layer 3**: Query Interface Validation
Sequential vs Parallel Execution:

```
Agent 1 (Environment)     → [SEQUENTIAL - prerequisite]
        ↓
Agent 2 (Bulk Loader)     → [PARALLEL with Agent 3]
Agent 3 (Query Interface) → [PARALLEL with Agent 2]
```

**Dependency Rule**: Environment validation must pass before data flow/query validation.
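The dependency rule can be sketched with `concurrent.futures`; the `run_agent` stub below is hypothetical, since in practice each agent executes its own test script under `tmp/e2e-validation/`:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(name: str) -> bool:
    # Placeholder: in practice this would shell out to the agent's test
    # script (e.g., `uv run python test_bulk_loader.py`) and report pass/fail.
    print(f"running {name}")
    return True

def run_validation() -> bool:
    # Agent 1 is a sequential prerequisite: stop early if it fails.
    if not run_agent("agent-1-env"):
        return False
    # Agents 2 and 3 have no mutual dependency, so run them in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(run_agent, ["agent-2-bulk", "agent-3-query"]))
    return all(results)
```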
Dynamic Todo Management:
Each agent produces a standalone test script (e.g., `test_bulk_loader.py`) plus log artifacts in its own directory.
Example Test Structure:

```python
def test_feature(conn):
    """Test 1: Feature description"""
    print("=" * 80)
    print("TEST 1: Feature description")
    print("=" * 80)

    results = {}

    # Test 1a: Subtest name
    print("\n1a. Testing subtest:")
    result_1a = perform_test()
    print(f"   Result: {result_1a}")
    results["subtest_1a"] = result_1a == expected_1a

    # Summary
    print("\n" + "-" * 80)
    all_passed = all(results.values())
    print(f"Test 1 Results: {'✓ PASS' if all_passed else '✗ FAIL'}")
    for test_name, passed in results.items():
        print(f"  - {test_name}: {'✓' if passed else '✗'}")

    return {"success": all_passed, "details": results}
```
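Test functions in this shape can be combined into an agent-level exit code. This aggregator is a sketch, not part of the template itself:

```python
import sys

def run_all(tests, conn=None) -> int:
    """Run test functions shaped like test_feature() above and return a
    process exit code (0 = all tests passed)."""
    outcomes = [t(conn) for t in tests]
    failed = [r for r in outcomes if not r["success"]]
    print(f"{len(outcomes) - len(failed)}/{len(outcomes)} tests passed")
    return 1 if failed else 0

# Usage in a test script: sys.exit(run_all([test_feature], conn))
```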
Severity Levels: Critical, High, and Medium (detailed criteria and examples in `references/bug_severity_classification.md`).
Bug Report Format:

```markdown
#### Bug N: Descriptive Name (**SEVERITY** - Status)

**Location**: `file/path.py:line`
**Issue**: One-sentence description
**Impact**: Quantified impact (e.g., "100% ingestion failure")
**Root Cause**: Technical explanation
**Fix Applied**: Code changes with before/after
**Verification**: Test results proving fix
**Status**: ✅ FIXED / ⚠️ PARTIAL / ❌ OPEN
```
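A hypothetical helper can render this format from a dict; all field names here are assumptions, not an existing API:

```python
def render_bug(n: int, bug: dict) -> str:
    """Render one bug entry in the report format above."""
    status_icon = {"fixed": "✅ FIXED", "partial": "⚠️ PARTIAL", "open": "❌ OPEN"}[bug["status"]]
    return "\n".join([
        f"#### Bug {n}: {bug['name']} (**{bug['severity'].upper()}** - {bug['status'].title()})",
        f"**Location**: `{bug['location']}`",
        f"**Issue**: {bug['issue']}",
        f"**Impact**: {bug['impact']}",
        f"**Root Cause**: {bug['root_cause']}",
        f"**Fix Applied**: {bug['fix']}",
        f"**Verification**: {bug['verification']}",
        f"**Status**: {status_icon}",
    ])
```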
Go/No-Go Criteria:

- **BLOCKER** = Any Critical bug unfixed
- **SHIP** = All Critical bugs fixed + (Medium bugs acceptable OR fixed)
- **DEFER** = >3 Medium bugs unfixed OR any High-severity bug
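These criteria can be sketched as a small function; the severity/status strings are assumed labels, adapt them to your bug tracker's fields:

```python
def release_decision(bugs: list[dict]) -> str:
    """Apply the go/no-go criteria to a list of {"severity", "status"} dicts."""
    unfixed = [b for b in bugs if b["status"] != "fixed"]
    if any(b["severity"] == "critical" for b in unfixed):
        return "BLOCKER"
    if any(b["severity"] == "high" for b in unfixed):
        return "DEFER"
    if sum(b["severity"] == "medium" for b in unfixed) > 3:
        return "DEFER"
    return "SHIP"
```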
Example Decision:
**Input**: ADR document (e.g., ADR-0002 QuestDB Refactor)
**Output**: Validation plan with 3-7 agents
Plan Structure:
## Validation Agents
### Agent 1: Environment Setup
- Deploy QuestDB via Docker
- Apply schema.sql
- Validate connectivity (ILP, PG, HTTP)
- Create .env configuration
### Agent 2: Bulk Loader Validation
- Test CloudFront → QuestDB ingestion
- Benchmark performance (target: >100K rows/sec)
- Validate deduplication (re-ingestion test)
- Multi-month ingestion test
### Agent 3: Query Interface Validation
- Test get_latest() with various limits
- Test get_range() with date boundaries
- Test execute_sql() with parameterized queries
- Test detect_gaps() SQL compatibility
- Test error handling (invalid inputs)
Directory Structure:

```
tmp/e2e-validation/
└── agent-1-env/
    ├── test_environment_setup.py
    ├── questdb.log
    ├── config.env
    └── schema-check.txt
```
Validation Checklist:

**Agent 2: Bulk Loader**

```
tmp/e2e-validation/
└── agent-2-bulk/
    ├── test_bulk_loader.py
    ├── ingestion_benchmark.txt
    └── deduplication_test.txt
```

**Agent 3: Query Interface**

```
tmp/e2e-validation/
└── agent-3-query/
    ├── test_query_interface.py
    └── gap_detection_test.txt
```
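A quick completeness check against these expected artifacts can be sketched as follows; the file names are taken from the layouts above:

```python
from pathlib import Path

# Expected per-agent artifacts, mirroring the directory layouts above.
EXPECTED = {
    "agent-2-bulk": ["test_bulk_loader.py", "ingestion_benchmark.txt",
                     "deduplication_test.txt"],
    "agent-3-query": ["test_query_interface.py", "gap_detection_test.txt"],
}

def missing_artifacts(root: Path) -> dict[str, list[str]]:
    """Map each agent to the artifacts it failed to produce (empty = complete)."""
    return {
        agent: [f for f in files if not (root / agent / f).exists()]
        for agent, files in EXPECTED.items()
    }
```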
Execution:

```bash
# Terminal 1
cd tmp/e2e-validation/agent-2-bulk
uv run python test_bulk_loader.py

# Terminal 2
cd tmp/e2e-validation/agent-3-query
uv run python test_query_interface.py
```
Template:

```markdown
# E2E Validation Findings Report

**Validation ID**: ADR-XXXX
**Branch**: feat/database-refactor
**Date**: YYYY-MM-DD
**Target Release**: vX.Y.Z
**Status**: [BLOCKED / READY / IN_PROGRESS]

## Executive Summary

E2E validation discovered **N critical bugs** that would have caused [impact]:

| Finding | Severity | Status | Impact       | Agent   |
| ------- | -------- | ------ | ------------ | ------- |
| Bug 1   | Critical | Fixed  | 100% failure | Agent 2 |

**Recommendation**: [RELEASE READY / BLOCKED / DEFER]

## Agent 1: Environment Setup - [STATUS]

...

## Agent 2: [Name] - [STATUS]

...
```
For each bug, commit the fix with a conventional-commit message (e.g., `fix: correct timestamp parsing in CSV ingestion`).

Example Fix Commit:

```bash
git add src/gapless_crypto_clickhouse/collectors/questdb_bulk_loader.py
git commit -m "fix: prevent pandas from treating first CSV column as index

BREAKING CHANGE: All timestamps were defaulting to epoch 0 (1970-01)
due to pandas read_csv() auto-indexing. Added index_col=False to
preserve first column as data.

Fixes #ABC-123"
```
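A post-load guard can catch the epoch-0 symptom from that commit message early. This is a sketch; the column name and millisecond unit are assumptions about the data:

```python
import pandas as pd

def check_timestamps(df: pd.DataFrame, column: str = "timestamp") -> None:
    """Fail fast if timestamps collapsed toward epoch 0, the symptom of
    the read_csv auto-indexing bug described above."""
    ts = pd.to_datetime(df[column], unit="ms", errors="coerce")
    if ts.isna().any():
        raise ValueError(f"{column}: unparseable timestamp values")
    if (ts < pd.Timestamp("2000-01-01")).any():
        raise ValueError(f"{column}: values near epoch 0 - check index_col handling")
```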
Run all tests:
/usr/bin/env bash << 'SKILL_SCRIPT_EOF'
cd tmp/e2e-validation
for agent in agent-*; do
echo "=== Running $agent ==="
cd $agent
uv run python test_*.py
cd ..
done
SKILL_SCRIPT_EOF
Update the VALIDATION_FINDINGS.md status after each fix is verified.
**Context**: Migrating from file-based storage (v3.x) to QuestDB (v4.0.0)

**Bugs Found**:
- `Sender.from_uri()` used instead of `Sender.from_conf()`
- `number_of_trades` sent as FLOAT where the schema expects LONG
- Deduplication not enabled (missing `DEDUP ENABLE UPSERT KEYS`)

**Impact**: Without this validation, v4.0.0 would have shipped with 100% data corruption and 100% ingestion failure.

**Outcome**: All 5 bugs fixed, system validated, v4.0.0 released successfully.
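The FLOAT-vs-LONG bug in this case study suggests a pre-send dtype check. The sketch below is illustrative, not the project's actual loader code; the column name comes from the case study:

```python
import pandas as pd

def enforce_long_columns(df: pd.DataFrame,
                         columns=("number_of_trades",)) -> pd.DataFrame:
    """Cast integer-valued columns to int64 so the ILP client sends
    LONG values matching the QuestDB schema, not FLOAT."""
    out = df.copy()
    for col in columns:
        if col in out.columns:
            out[col] = out[col].astype("int64")
    return out
```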
❌ **Bad**: Assume Docker/database is working, jump to data ingestion tests
✅ **Good**: Agent 1 validates environment first, catches port conflicts and schema errors early

❌ **Bad**: Run Agent 2, wait for completion, then run Agent 3
✅ **Good**: Run Agents 2 and 3 in parallel (no dependency between them)

❌ **Bad**: Copy/paste test output into Slack/email
✅ **Good**: Structured VALIDATION_FINDINGS.md with severity, status, fix tracking

❌ **Bad**: "Performance is 55% below SLO, but we'll fix it later"
✅ **Good**: Document in VALIDATION_FINDINGS.md, make an explicit go/no-go decision

❌ **Bad**: Apply fix, assume it works, move on
✅ **Good**: Re-run the failing test, update status in VALIDATION_FINDINGS.md
Not applicable - validation scripts are project-specific (stored in tmp/e2e-validation/)
- `example_validation_findings.md` - Complete VALIDATION_FINDINGS.md template
- `agent_test_template.py` - Template for creating validation test scripts
- `bug_severity_classification.md` - Detailed severity criteria and examples

Not applicable - validation artifacts are project-specific
| Issue | Cause | Solution |
|---|---|---|
| Container not starting | Colima/Docker not running | Run `colima start` before Agent 1 |
| Port conflicts | Ports already in use | Stop conflicting containers or use different ports |
| Schema application fails | Invalid SQL syntax | Check `schema.sql` for database-specific compatibility |
| Agent 2/3 fail without Agent 1 | Environment not validated | Ensure Agent 1 completes before starting Agents 2/3 |
| Test script import errors | Missing dependencies | Run `uv pip install` in the agent directory |
| Bug status not updating | VALIDATION_FINDINGS.md stale | Manually refresh status after each fix |
| Parallel agent interference | Shared-resource conflict | Ensure agents use isolated directories |
| Decision unclear | Mixed Critical/Medium severities | Apply the Go/No-Go criteria strictly |
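For the port-conflict row above, a quick probe can be sketched in Python. The port numbers are QuestDB's documented defaults (9000 HTTP, 8812 Postgres wire, 9009 ILP):

```python
import socket

QUESTDB_PORTS = {"http": 9000, "pg": 8812, "ilp": 9009}  # QuestDB defaults

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) != 0

# Usage: report which QuestDB ports are already taken before Agent 1 runs.
# conflicts = [name for name, p in QUESTDB_PORTS.items() if not port_free(p)]
```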
After this skill completes, reflect before closing the task:
Do NOT defer. The next invocation inherits whatever you leave behind.