# outcome-engineering
Reframe implementation-focused tasks into measurable outcomes with verification criteria, cost justification, and risk gates. Use for significant features, build-vs-buy, or failed approaches.
Install: `npx claudepluginhub sam-dumont/claude-skills --plugin outcome-engineering`

This skill uses the workspace's default tool permissions.
Based on the [o16g manifesto](https://o16g.com/) by Cory Ondrejka. Software engineering is outcome delivery, not code production.
| Task Thinking | Outcome Thinking |
|---|---|
| "Build feature X" | "Enable users to achieve Y, verified by Z" |
| "Fix bug B" | "Restore correct behavior, prevent recurrence" |
| "Refactor module M" | "Reduce change cost from X to Y, measured by Z" |
| "Add technology T" | "Solve problem P with measurable improvement" |
Before starting significant work, produce an Outcome Frame. This takes ~3 minutes, not 30. Skip it for trivial tasks (typo fixes, obvious single-line bugs).
### 1. Outcome

State the measurable change being delivered. Not what you'll build — what will be different when you're done.
Anti-pattern: "Build a REST API for user management" Outcome-first: "External services can create, read, update, and delete user records over HTTP, with <200ms p95 latency"
Ask: "What is measurably different when this is done?"
### 2. Verification

How will you PROVE it worked? Not "it looks right" — observable, repeatable evidence.
Verification must be concrete:
Anti-pattern: "Test it manually and make sure it works" Outcome-first: "Integration test hits /users CRUD endpoints, asserts 2xx responses and correct payloads. Load test confirms p95 <200ms at 100 RPS."
Ask: "What specific evidence proves this outcome was achieved?"
### 3. Cost Justification

Is this worth the compute/effort? Not everything that could be built should be built.
Consider the effort required, the value delivered, and the cost of doing nothing:
Anti-pattern: "It's in the backlog so we should do it" Outcome-first: "Users currently can't reset passwords without emailing support (3 tickets/day). Self-service reset eliminates this entirely."
Ask: "Is this worth the effort? What happens if we don't do it?"
### 4. Risk Gates

What unknowns could derail this? Unknown or unmitigated risk blocks progress — surface it now, not after you've built the wrong thing.
Risk categories include unvalidated technical assumptions, external dependencies, and unknowns about scale or compatibility:
Anti-pattern: "We'll figure out the edge cases as we go" Outcome-first: "Risk: We assume the OAuth provider supports PKCE. Gate: Verify PKCE support before building the auth flow."
Ask: "What could make this fail? What should we verify before committing?"
### 5. Hypothesis

When uncertain, frame work as an experiment. What are you testing? What's the cheapest way to validate it?
**Anti-pattern**: Build the full feature, then discover the approach doesn't work

**Outcome-first**: "Hypothesis: Server-sent events can deliver real-time updates at our scale. Experiment: Spike a minimal SSE endpoint, load test at 1000 concurrent connections. If it fails, evaluate WebSockets before building the full notification system."
Ask: "What assumption are we testing? What's the cheapest way to validate it?"
After working through the five questions, produce:
## Outcome Frame
**Outcome**: [What is measurably different when done]
**Verification**: [Specific evidence that proves success]
**Cost justification**: [Why this is worth doing now]
**Risk gates**: [What to verify before committing]
**Hypothesis**: [What assumption we're testing] (optional — only when there's genuine uncertainty)
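For example, the password-reset scenario above might produce a frame like this (verification targets illustrative):

**Outcome**: Users can reset their own passwords without contacting support.
**Verification**: An end-to-end test completes the full reset flow; support tickets tagged "password reset" drop from ~3/day toward zero.
**Cost justification**: Eliminates ~3 support tickets/day and unblocks locked-out users immediately.
**Risk gates**: Verify the transactional email provider delivers reliably before building the flow.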
If the user provides enough context that answers are obvious, fill in what's clear and only ask about what's genuinely uncertain.
During work, apply these lightweight checks. They don't slow you down — they prevent you from building the wrong thing fast.
Read before writing. Understand before changing.
Before modifying any code, take stock of the surrounding context: how it's structured, how it's used, and what already tests it.
When the urge strikes to "just start coding," that's exactly when you need 2 minutes of context-gathering.
When uncertain, build the cheapest experiment first.
If you're not sure an approach will work, spike the smallest version that can prove or disprove it.
The goal is to fail fast and cheap, not to fail slow and expensive.
Document reasoning and rejected paths.
When making non-obvious decisions, record what you chose, why you chose it, and which alternatives you rejected.
This isn't bureaucratic overhead — it's future debugging information. When something goes wrong, "why did we do it this way?" has an answer.
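One lightweight format for this, entirely illustrative and worth adapting to your own conventions, is a short decision note committed alongside the change:

```markdown
## Decision: store sessions in Postgres rather than Redis
- **Chosen because**: we already operate Postgres; measured session load is low
- **Rejected**: Redis, which adds an operational dependency with no measured need
- **Revisit if**: session traffic or latency measurably degrades
```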
Before irreversible actions, stop and check.
Before any action that's hard to undo, ask: "If this goes wrong, how do we recover?"
When work completes, validate against the Outcome Frame.
Go back to the Outcome Frame and check each verification criterion.
Don't claim "done" on vibes. Show evidence.
If the outcome wasn't achieved, don't just debug the code — debug the decision.
This prevents the pattern of fixing symptoms while the root cause keeps producing new bugs.
When fixing a bug or failure, fix the immediate issue, then prevent the whole category from recurring.
A fix without prevention is just a temporary patch.
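As an illustration, with a hypothetical module and bug, prevention usually means a regression test that pins down the whole category of inputs, not just the one that failed:

```python
# Hypothetical prevention step: the reported bug was percentile() crashing on
# an empty list. The regression test covers the category (degenerate inputs),
# not only the single failing case.
import pytest

from stats import percentile  # assumption: the module that was just fixed


@pytest.mark.parametrize("values", [[], [1], [1, 1, 1], list(range(1000))])
def test_percentile_tolerates_degenerate_inputs(values):
    result = percentile(values, p=95)
    if values:
        assert result is not None
    else:
        assert result is None  # documented contract after the fix (assumed)
```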
Not every task needs an Outcome Frame. Typo fixes, obvious bugs, simple config changes — just do them. The frame is for significant work where "what does done look like?" isn't immediately obvious.
Not analysis paralysis. The Outcome Frame takes 3 minutes, not 30. If you're spending more time framing than building, you're doing it wrong. When in doubt, bias toward action with a lightweight frame.
Not "build everything from scratch." Build to Learn means test hypotheses cheaply — use existing tools, libraries, and patterns. The experiment is about validating the approach, not reinventing infrastructure.
Not ignoring engineering discipline. Tests, code review, CI/CD, security — these are the verification mechanisms that make outcome engineering work. Without them, "verified reality" is just "hoped-for reality."
Not a replacement for existing skills. Outcome engineering is a framing layer. It wraps around brainstorming, architecture, implementation — it adds "why" and "what outcome" without replacing "how."
| Phase | Key Question | Time |
|---|---|---|
| Frame | What is measurably different when done? | ~3 min |
| Execute | Am I building toward the outcome or just writing code? | Continuous |
| Validate | Can I show evidence the outcome was achieved? | ~2 min |
| Principle | One-Liner |
|---|---|
| P01 Human Intent | Define the destination before exploring paths |
| P02 Verified Reality | Measurable evidence, not vibes |
| P04 Backlog Dead | Is this worth the compute? |
| P06 Context First | Read before writing |
| P07 Build to Learn | Cheapest experiment to validate |
| P08 Debug Decisions | Wrong outcome? Debug the decision, not just the code |
| P13 Show Work | Document reasoning and rejected paths |
| P14 Immune System | Fix the bug, then prevent the category |
| P15 Risk Gates | Unknowns block progress — surface them early |
| P16 Validation | Continuously verify against reality |