From dev-workflow
Use when you need to create a verification prompt with falsifiable assertions for a plan, code change, or session response. Generates structured prompts that enable rigorous verification.
```
npx claudepluginhub n0rvyn/indie-toolkit --plugin dev-workflow
```
This skill uses the workspace's default tool permissions.
Generate a VF/DF verification prompt based on actual code analysis. The user copies the output and pastes it into the target session (e.g., a native `/plan` session, an execution session, or any other conversation).
Theoretical basis: verification (backward reasoning) has lower cognitive load than generation (forward reasoning), and the resulting reasoning paths are complementary to forward chain-of-thought. Critiquing external input overcomes egocentric bias. (Wu & Yao, "Asking LLMs to Verify First is Almost Free Lunch", 2025)
This skill only generates prompts. It does not execute verification.
Determine what to verify. The user provides one of:
- `/plan` output, `docs/06-plans/*.md`, or any plan document
- `--diff` flag — use `git diff` for code change verification

If the input is a plan file, check its header for a `Design doc:` reference. If found, treat as "design doc exists".
If unclear what to verify, ask the user.
Auto-detect content type and verification focus — do not ask the user:
| Signal | Determination |
|---|---|
| Input is a plan file | Stage = plan verification |
| Design doc exists (provided or found in plan header) | Stage += design faithfulness |
| Git diff has content + --diff flag | Stage = implementation verification |
| Plan/change involves View, UI, layout, styling | Content = UI |
| Plan/change involves Service, Agent, Tool, data flow, event entry points | Content = architecture |
| Plan/change has >= 5 steps with inter-step dependencies | Content = multi-step |
| Plan/change modifies existing code behavior | Content = behavioral change |
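The signal table above can be sketched as a small detection routine. This is a hypothetical illustration, not the skill's actual code: the function name, signal parameters, and keyword regexes are assumptions, and the inter-step dependency check is elided.

```python
import re

def detect(plan_text: str, is_plan_file: bool, has_design_doc: bool,
           diff_has_content: bool) -> tuple[set[str], set[str]]:
    """Map the signal table to (verification stages, content types)."""
    stages, content = set(), set()
    # Stage signals
    if is_plan_file:
        stages.add("plan verification")
    if has_design_doc:
        stages.add("design faithfulness")
    if diff_has_content:
        stages.add("implementation verification")
    # Content signals (keyword lists are illustrative)
    if re.search(r"\b(View|UI|layout|styling)\b", plan_text):
        content.add("UI")
    if re.search(r"\b(Service|Agent|Tool|data flow)\b", plan_text):
        content.add("architecture")
    # Count numbered steps; dependency analysis omitted for brevity
    steps = len(re.findall(r"^\d+\.", plan_text, flags=re.M))
    if steps >= 5:
        content.add("multi-step")
    return stages, content
```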
Auto-select applicable strategies (can overlap):
| Context | VF Strategies | DF Strategies |
|---|---|---|
| Plan + behavioral change | S1 | — |
| Plan + architecture | S1 + S3 | — |
| Plan + UI | S1 + U1 + U2 | — |
| Plan + multi-step | S2 | — |
| Plan + design doc exists | S1 | D1 + selected D2-D5 |
| Code change (--diff) | R1 | — |
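Because strategies can overlap, selection accumulates into sets rather than picking a single branch. A minimal sketch of the table above, with assumed function and signal names:

```python
def select_strategies(stages: set[str], content: set[str],
                      has_design_doc: bool, is_diff: bool) -> tuple[set, set]:
    """Accumulate VF and DF strategy codes; rows can overlap."""
    vf, df = set(), set()
    if "plan verification" in stages:
        if "behavioral change" in content:
            vf.add("S1")
        if "architecture" in content:
            vf |= {"S1", "S3"}
        if "UI" in content:
            vf |= {"S1", "U1", "U2"}
        if "multi-step" in content:
            vf.add("S2")
        if has_design_doc:
            vf.add("S1")
            df.add("D1")  # plus selected D2-D5, per the sub-strategy rules
    if is_diff:
        vf.add("R1")
    return vf, df
```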
DF sub-strategy selection (when design doc exists): D1 is always included; add D2-D5 according to each strategy's Trigger condition listed below.
Critical rule: Every assertion must reference actual file paths, function names, or line numbers from the code read in this step. Generic placeholders like "some file" or "a function" are prohibited.
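The critical rule can be enforced mechanically. A minimal lint sketch, assuming assertions are plain strings; the placeholder list and path regex are illustrative, not exhaustive:

```python
import re

# Generic placeholders the rule prohibits (illustrative list)
PLACEHOLDERS = re.compile(r"\b(some file|a function|somewhere)\b", re.I)
# A concrete file reference, optionally with a line number, e.g. Foo.swift:42
FILE_REF = re.compile(r"[\w/.-]+\.\w+(?::\d+)?")

def ungrounded(assertions: list[str]) -> list[str]:
    """Return assertions that violate the grounding rule."""
    return [a for a in assertions
            if PLACEHOLDERS.search(a) or not FILE_REF.search(a)]
```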
Output the assembled prompt in this format:
## VF Prompt
**Input**: {description of what was analyzed}
**Design doc**: {path or "none"}
**Strategies**: {list of selected strategy codes, e.g., S1 + S2 + D1}
---
**Copy the following and paste into the target session:**
```
{assembled prompt with all assertions}
```
Each template below shows the structure. When generating, replace all {...} with content derived from actual code analysis.
Trigger: plan + behavioral change (always applicable to plans).
An external reviewer examined this plan against the codebase. Verify each claim (cite file:line), fix the plan if confirmed:
1. Step {N}'s {specific operation} will break {file}'s {function} because {reason from actual code reading}
2. Step {N} assumes {precondition}, but {file:line} shows this precondition is not established by any prior step
3. Step {N} modifies {file}, but {other_file:line} also depends on the same {entity} and is not addressed in the plan
After verifying the above, check if similar issues exist elsewhere in the codebase.
Assertion dimensions (pick the weakest 3-5 areas of the plan):
Trigger: plan + multi-step (>= 5 steps with dependencies).
Assume this plan has been fully executed:
1. Build failed. Given step dependencies, the most likely compile error is: {specific error, e.g., "Step 3 adds a Protocol requirement that Step 5's class must conform to, but Step 5 runs after Step 3 without adding the conformance"}. Which file reports the error? Is this covered by the plan?
2. Build passed, but a user performing {core operation from plan} hits a regression. Trace backward: {user action} → {code entry point file:line} → {failure point file:line}. The most likely regression is: {specific scenario}. Is this covered by the plan?
For each, cite the specific plan step. Uncovered items need plan updates.
Trigger: plan + architecture changes.
A tech lead familiar with this codebase reviewed this plan:
1. "{new entry point from plan} and the existing {old_entry_file:line} are two independent paths calling {core_function}. The plan doesn't describe a coordination mechanism."
2. "{replaced component} still has references at {file:line} and {file:line}. The plan's replacement checklist is incomplete."
3. "Step {N} adds a new {field/enum case} to {Model}, but {consumer_file:line}'s switch/if doesn't handle this new value."
Verify each (cite file:line). Confirmed items need plan updates.
Trigger: plan + UI.
Verify all UI steps in this plan:
1. List every size, spacing, color, and font value each step will use
2. Check each against {DesignTokens file path}
3. Compare against {existing similar component path}'s patterns — identify inconsistencies
Output: | Step | UI Value | Token | Status |
Missing or inconsistent items need plan updates.
Trigger: plan + UI.
Assume this plan is implemented. A user opens the screen with:
- Dynamic Type at AX5 (maximum accessibility size)
- Dark Mode
- iPhone SE (smallest screen)
- {data condition from plan, e.g., "list has 50 items" or "text is 500 Chinese characters"}
Walk through each UI component in the plan: which overflows? which text truncates? which spacing collapses to unusable? Cite specific plan steps and propose fixes.
Trigger: code change (--diff flag).
These code changes are complete (from git diff):
{key change summary: filename + what changed, 1 line per file}
A user performs {core operation affected by changes} but hits a runtime bug (not a compile error).
1. Most likely bug? Trace backward from user action to the changed code lines.
2. Do changes affect other features that depend on the same data/state but weren't modified?
3. Cold start (first launch, no existing data) vs hot path (existing data) — do changes behave consistently?
Cite specific file:line for each.
All DF strategies require a design document. If no design doc exists, skip all DF strategies.
Trigger: design doc exists (always selected when DF applies).
A requirements analyst traced this plan against the design document. Verify each claim, then update the plan:
[Completeness: Design → Plan]
1. Design doc {section/paragraph ref} requires "{specific requirement text}", but no plan step covers this
2. Design doc {section} specifies {value/parameter/schema}, but the plan doesn't mention this value
[Fidelity: Plan → Design]
3. Plan step {N}'s {operation} reinterprets design doc {section}'s requirement — design says "{quote}", plan says "{plan text}"
4. Plan step {N} introduces {decision} that changes the design's intended {behavior/flow}
After verifying, build a complete bidirectional trace table:
| Design Requirement | Plan Step | Status (covered / missing / deviated) |
Trigger: plan contains decisions not obviously derived from design.
A requirements analyst flagged plan decisions with no design doc basis. For each, verify (cite design doc section) whether it's "design left blank — ask user" or "design already specified — plan ignored it":
1. Plan step {N} decided {specific decision}, but design doc {section} does not authorize this. Design says: "{quote}"
2. Plan step {N} introduces {tech choice / interaction / default value} not covered in design
3. {assertion}
For each unauthorized decision, label:
- Design left blank → ask user before implementing
- Design already specified → update plan to match design
- Reasonable implementation detail → keep but label "implementation decision"
Trigger: design doc has preconditions, constraints, or business rules.
A requirements analyst extracted implicit assumptions from the design doc that the plan doesn't address. Verify each:
[Implicit Preconditions]
1. Design doc {section}'s "{text}" implies precondition: {extracted assumption}. No plan step establishes this.
2. {assertion}
[Business Rules]
3. Design doc {section} describes rule "{rule content}", but plan step {N}'s implementation doesn't account for it.
4. {assertion}
After verification, list all implicit assumptions from the design doc and confirm plan coverage for each.
Trigger: plan step count differs significantly from design requirement count.
A requirements analyst compared step granularity between design doc and plan. Verify each:
[Improper Merging]
1. Design doc {section} describes {A → B → C} as three stages. Plan merges them into step {N}, losing {constraint/checkpoint from stage B}.
2. {assertion}
[Improper Splitting]
3. Design doc {section} treats {X} as an atomic operation. Plan splits it into steps {N} and {M}, introducing intermediate state "{description}" not considered in design.
For each granularity mismatch: what constraint is lost or what risk is introduced?
Trigger: design doc mentions edge cases, error flows, or extreme conditions.
A requirements analyst extracted boundary scenarios from the design doc that the plan doesn't handle. Verify each:
1. Design doc {section} mentions {boundary scenario}, but plan step {N} only handles the happy path
2. Design doc {section}'s {requirement} under {extreme condition} — behavior undefined in plan
3. {assertion}
After verification, check if the design doc contains other boundary/error/edge scenarios not covered by the plan.