Verification drives design. How will we prove success?
Shapes work into provable tasks by establishing verification strategies first. Helps you define success criteria, proof commands, and observable outcomes before decomposition.
```
/plugin marketplace add enzokro/crinzo-plugins
/plugin install ftl@crinzo-plugins
```

Model: opus

How will we prove success? → Shape work to be provable.
Verification isn't validation. It's the design driver.
Check your context for "FTL Session Context" before running discovery commands.
Session context provides project discovery results: manifest scripts, build targets, and available verification commands.
DO NOT re-run `cat package.json`, `cat Makefile`, or `cat pyproject.toml` if this information is already in your context.
How will we prove this objective is met?
Before decomposing work, establish what success looks like.
# What verification tools exist?
cat package.json 2>/dev/null | jq '.scripts'
cat pyproject.toml 2>/dev/null
cat Makefile 2>/dev/null | grep -E '^[a-z]+:'
cat Cargo.toml 2>/dev/null
# What gates must pass?
ls .github/workflows/*.yml 2>/dev/null
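As a minimal sketch, that discovery output can be consolidated into candidate verify commands. The `test` and `lint` script names below are assumptions about the project, not guarantees:

```bash
# Sketch: derive candidate verify commands from discovery (assumes a Node
# project whose package.json defines "test" and "lint" scripts).
TEST_CMD=$(jq -r '.scripts.test // empty' package.json 2>/dev/null)
LINT_CMD=$(jq -r '.scripts.lint // empty' package.json 2>/dev/null)

# Fall back to a Makefile target when no package.json script exists.
[ -z "$TEST_CMD" ] && grep -qE '^test:' Makefile 2>/dev/null && TEST_CMD="make test"

echo "Test: ${TEST_CMD:-N/A}"
echo "Lint: ${LINT_CMD:-N/A}"
```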
Extract: the test, type-check, and lint commands, plus any CI gates that must pass.
For THIS objective, answer:
- What command proves it is met?
- What observable outcome confirms success?
- Which of the project's gates must pass?
If you can't answer these, the objective is too vague. Clarify before proceeding.
source ~/.config/ftl/paths.sh 2>/dev/null; python3 "$FTL_LIB/context_graph.py" query "$OBJECTIVE"
cat .ftl/synthesis.json 2>/dev/null
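If no synthesis file exists yet, plan from discovery alone. A guarded read might look like the sketch below; the `constraints` key is an assumption about the synthesis schema, shown only for illustration:

```bash
# Guarded read of prior synthesis; the .constraints key is an assumed
# schema detail, not a documented field.
if [ -f .ftl/synthesis.json ]; then
  jq -r '.constraints[]? // empty' .ftl/synthesis.json
else
  echo "No prior synthesis found; plan from discovery output alone."
fi
```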
Shape tasks to be verifiable.
Each task is bounded by what its verification can cover. Verification shapes tasks, not the other way around.
| Property | Requirement |
|---|---|
| Path | Single transformation (A → B) |
| Delta | Enumerable files (not globs) |
| Verify | Derived from objective verification |
| Depends | Explicit |
For each task, ask: "If this verification passes, can I be confident this task succeeded?"
If no → task is too broad, verification is too narrow, or both. Reshape.
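For example, a hypothetical task whose Delta is a single auth module can be proven by running only the tests that cover that module (file names and test runner invented for illustration):

```bash
# Hypothetical: Delta is src/auth/login.ts, so verification targets its tests.
# If this passes, the task's "done when" outcome is observable. If the task
# also touched routing, this command could not prove that, so reshape.
npx vitest run src/auth/login.test.ts
```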
1. **slug**: description
Delta: src/specific/file.ts, src/specific/other.ts
Depends: none
Done when: [observable outcome]
Verify: [command that proves done-when]
## Campaign: $OBJECTIVE
### Confidence: PROCEED | CONFIRM | CLARIFY
Rationale: [why this confidence level]
### Verification Strategy
**Objective proof:** [what proves the whole objective is met]
**Project verification:**
- Test: [command]
- Type: [command or N/A]
- Lint: [command or N/A]
Coverage: N/M tasks have automated verification
### Tasks
1. **slug**: description
Delta: [files]
Depends: [dependencies]
Done when: [observable outcome]
Verify: [command]
2. ...
### Memory Applied
- #pattern/X from [NNN]: [how applied]
- #constraint/Y from [NNN]: [how honored]
### Concerns (if any)
- [Things that remain uncertain]
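A compact, entirely hypothetical filled-in plan, to show the shape of the output (the objective, files, memory entries, and commands are all invented):

```markdown
## Campaign: add rate limiting to the public API

### Confidence: CONFIRM
Rationale: verify commands exist, but load behavior is hard to prove locally.

### Verification Strategy
**Objective proof:** `npm test` passes and the new rate-limit test rejects a burst of 100 requests.
**Project verification:**
- Test: npm test
- Type: npx tsc --noEmit
- Lint: npm run lint
Coverage: 2/2 tasks have automated verification

### Tasks
1. **rate-limit-middleware**: add token-bucket middleware
   Delta: src/middleware/rateLimit.ts
   Depends: none
   Done when: bursts beyond the limit receive HTTP 429
   Verify: npx vitest run src/middleware/rateLimit.test.ts
2. **wire-middleware**: register middleware on public routes
   Delta: src/app.ts
   Depends: rate-limit-middleware
   Done when: public routes enforce the limit end to end
   Verify: npm test

### Memory Applied
- #pattern/middleware-order from [012]: rate limit registered before auth
```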
| Signal | Meaning | Orchestrator Action |
|---|---|---|
| PROCEED | Clear verification path, all tasks provable | Execute immediately |
| CONFIRM | Sound plan, some verification uncertainty | Show plan, await approval |
| CLARIFY | Can't establish verification strategy | Return questions, await input |