Enforces TDD guardrails (red/green/refactor) for small fixes and quick changes in 1-3 files, checking test infrastructure, architecture smells, reusability, and task scope first.
npx claudepluginhub ovid/paad --plugin paad

This skill uses the workspace's default tool permissions.
Quick fixes and small changes with guardrails. You get the speed of vibe coding without the recklessness — mandatory TDD, architecture awareness, and reusable component detection.
Guides test-driven development for new features or bug fixes: decompose into behaviors, cycle Red (failing test) → Green (minimal code) → Refactor. Needs /optimus:init and test infrastructure.
Guides Test-Driven Development for features and bugfixes: RED (failing test first), GREEN (minimal code), REFACTOR. Preflight gate routes medium/high complexity to planning.
Enforces Test-Driven Development: write failing tests first, then minimal code to pass, refactor. For implementation tasks, bug fixes needing regression tests, behavior changes.
digraph vibe {
"Task clear?" [shape=diamond];
"Test infrastructure?" [shape=diamond];
"Scope: how many files?" [shape=diamond];
"Architecture smell?" [shape=diamond];
"Reusable components found?" [shape=diamond];
"RED: test result?" [shape=diamond];
"Ask or clarify" [shape=box];
"ASK: set up tests or skip TDD?" [shape=box];
"WARN: may be bigger than a vibe task" [shape=box];
"STOP: investigate deeper issues" [shape=box, style=bold];
"Recommend using existing code" [shape=box];
"STOP: feature may exist or test is wrong" [shape=box, style=bold];
"STOP: unexpected failure, discuss" [shape=box, style=bold];
"Proceed to GREEN" [shape=box];
"Run pre-flight checks" [shape=box];
"Task clear?" -> "Run pre-flight checks" [label="yes"];
"Task clear?" -> "Ask or clarify" [label="no"];
"Ask or clarify" -> "Task clear?";
"Run pre-flight checks" -> "Test infrastructure?";
"Test infrastructure?" -> "Scope: how many files?" [label="yes"];
"Test infrastructure?" -> "ASK: set up tests or skip TDD?" [label="no"];
"ASK: set up tests or skip TDD?" -> "Scope: how many files?";
"Scope: how many files?" -> "Architecture smell?" [label="1-3 files"];
"Scope: how many files?" -> "WARN: may be bigger than a vibe task" [label="4+ files"];
"WARN: may be bigger than a vibe task" -> "Architecture smell?";
"Architecture smell?" -> "STOP: investigate deeper issues" [label="simple task, complex impl"];
"Architecture smell?" -> "Reusable components found?" [label="no smell"];
"Reusable components found?" -> "Recommend using existing code" [label="yes"];
"Reusable components found?" -> "RED: test result?" [label="no"];
"Recommend using existing code" -> "RED: test result?";
"RED: test result?" -> "STOP: feature may exist or test is wrong" [label="passes unexpectedly"];
"RED: test result?" -> "STOP: unexpected failure, discuss" [label="fails unexpectedly"];
"RED: test result?" -> "Proceed to GREEN" [label="fails as expected"];
}
/paad:vibe accepts optional $ARGUMENTS:

- /paad:vibe — ask the user what needs fixing
- /paad:vibe fix the login timeout bug — start working on the described task immediately
- /paad:vibe src/components/Modal.tsx add close on escape key — task with a file hint

When arguments are provided, treat them as the task description. Still ask clarifying questions if the task is unclear.
If no $ARGUMENTS provided, ask: "What needs fixing or changing?"
Once you have a task description:
Before writing any code, check these. If any raise concerns, discuss with the user before proceeding.
Does this project have a test framework and runner? Look for:

- Test directories (test/, tests/, spec/, __tests__/)
- Config files (jest.config, vitest.config, pytest.ini, .rspec, phpunit.xml, Cargo.toml with [dev-dependencies], etc.)

If no test infrastructure exists, tell the user. Ask: "There's no test setup in this project. Want me to set up a basic test framework first, or proceed without TDD?" If they choose to proceed without TDD, still follow GREEN and REFACTOR steps but skip RED.
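The infrastructure check above can be sketched as a small helper. The directory and config-file names come from the checklist; the list is illustrative, not exhaustive, and a real check would also consult the project's package manifest.

```python
from pathlib import Path

# Pre-flight check: does this project show any sign of a test framework?
TEST_DIRS = ("test", "tests", "spec", "__tests__")
CONFIG_FILES = ("jest.config.js", "vitest.config.ts", "pytest.ini",
                ".rspec", "phpunit.xml")

def has_test_infrastructure(root: str) -> bool:
    """Return True if a test directory, test config, or dev-dependency block exists."""
    base = Path(root)
    if any((base / d).is_dir() for d in TEST_DIRS):
        return True
    if any((base / f).is_file() for f in CONFIG_FILES):
        return True
    # Rust projects declare test-only crates under [dev-dependencies].
    cargo = base / "Cargo.toml"
    return cargo.is_file() and "[dev-dependencies]" in cargo.read_text()
```

If this returns False, that is the point to ask the user whether to set up a framework or skip RED.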
Does the code being changed already have tests? If yes, note them — they'll inform your RED step and catch regressions. If no, that's fine but worth noting.
How many files and modules does this change likely touch?
If the task is conceptually simple (e.g., "only admin users can download finance reports") but investigation reveals it requires a lot of work to implement, stop. Investigate whether there are deeper architectural issues making the work harder than it should be. Discuss findings with the user before proceeding.
If the task involves common functionality — toast notifications, modals, form validation, error handling, data formatting, API calls, permission checks, logging, etc. — search the codebase first for existing implementations, shared utilities, or components that already cover it.
If found: tell the user what you found and recommend using/extending the existing code rather than building from scratch.
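One way to sketch that reuse check: scan source files for keywords related to the task (e.g. "toast" or "modal"). The file extensions and plain substring matching here are assumptions; in practice you would use the project's own search tools.

```python
from pathlib import Path

def find_existing(root: str, keywords: list[str],
                  exts: tuple[str, ...] = (".py", ".ts", ".tsx")) -> list[str]:
    """Return paths of source files that mention any of the keywords."""
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore").lower()
            if any(kw.lower() in text for kw in keywords):
                hits.append(str(path))
    return hits
```

A non-empty result is the cue to recommend extending existing code rather than writing a parallel implementation.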
The RED → GREEN → REFACTOR cycle below is mandatory. Follow it strictly.
Write a single test that defines the expected behavior for the change.
Run it. It should fail. If it doesn't, stop: the feature may already exist, or the test is wrong. Investigate before writing any code.
Only proceed to GREEN when the test fails in the expected way.
Write the simplest code that makes the failing test pass. Resist the urge to add features the test doesn't require, handle cases no test covers, or refactor unrelated code along the way.
Run the test. It should pass. Run all existing tests in the affected area too — make sure nothing broke.
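A GREEN sketch for the same kind of step, again with an invented slugify example: the simplest code that satisfies the test, and nothing more.

```python
# GREEN: the simplest implementation that makes the failing test pass.
# `slugify` is a hypothetical example, not part of the skill itself.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"
```

No whitespace collapsing, no Unicode handling: the test doesn't demand it yet, so GREEN doesn't build it.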
Now clean up. This is the step AI skips unless told to, and it's where real quality comes from. Look for duplication that can be extracted, unclear names, dead code, and logic that belongs in an existing module.
Run all tests after refactoring. Everything must stay green.
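A REFACTOR sketch of the most common cleanup, removing duplication without changing behavior. The format_user and format_admin functions are invented examples of the kind of repetition this step should catch.

```python
# REFACTOR: two hypothetical functions used to repeat the same
# name-formatting logic; the shared part now lives in one helper,
# and the observable behavior is unchanged (tests stay green).
def _display_name(first: str, last: str) -> str:
    return f"{first.strip().title()} {last.strip().title()}"

def format_user(first: str, last: str) -> str:
    return _display_name(first, last)

def format_admin(first: str, last: str) -> str:
    return _display_name(first, last) + " (admin)"
```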
If the task involves multiple behaviors, repeat the red/green/refactor cycle for each. One test at a time, one behavior at a time.
After the fix is complete, provide a brief summary: what changed, which tests were added, and anything the user should follow up on.
Suggest paad skills when the change warrants it. Don't suggest follow-ups for trivial fixes.
For example:

- "/paad:agentic-review before merging — this touched security-sensitive code."
- "/paad:agentic-a11y src/path/to/changed/files to check accessibility."
- "/paad:agentic-architecture to investigate whether there are deeper structural issues."