Use before claiming any task is complete. Performs a structured self-check to ensure all acceptance criteria are met. Triggers: "I'm done", "that's complete", "finished", before reporting DONE.
mkdir -p ~/.omni-skills/sessions
_PROACTIVE=$(~/.claude/skills/superomni/bin/config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
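# _TEL_START pairs with the telemetry logging block at the end of the session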
_TEL_START=$(date +%s)
echo "Branch: $_BRANCH | PROACTIVE: $_PROACTIVE"
If PROACTIVE is false: do NOT proactively suggest skills. Only run skills the
user explicitly invokes. If you would have auto-invoked, say:
"I think [skill-name] might help here — want me to run it?" and wait.
Report status using one of these at the end of every skill session:
Pipeline stage order: THINK → PLAN → REVIEW → BUILD → VERIFY → SHIP → REFLECT
REVIEW is the only human gate. All other stages auto-advance on DONE.
| Status | At REVIEW stage | At all other stages |
|---|---|---|
| DONE | STOP — present review summary, wait for user input (Y / N / revision notes) | Auto-advance — print [STAGE] DONE → advancing to [NEXT-STAGE] and immediately invoke next skill |
| DONE_WITH_CONCERNS | STOP — present concerns, wait for user decision | STOP — present concerns, wait for user decision |
| BLOCKED / NEEDS_CONTEXT | STOP — present blocker, wait for user | STOP — present blocker, wait for user |
When auto-advancing:
[STAGE] DONE → advancing to [NEXT-STAGE] ([skill-name])

When the user sends a follow-up message after a completed session, before doing anything else:
ls docs/superomni/specs/spec-*.md docs/superomni/plans/plan-*.md docs/superomni/ .superomni/ 2>/dev/null | head -20
git log --oneline -3 2>/dev/null
To find the latest spec or plan:
_LATEST_SPEC=$(ls docs/superomni/specs/spec-*.md 2>/dev/null | sort | tail -1)
_LATEST_PLAN=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
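To surface what was found, a quick echo works (a sketch using the variables above):
echo "Latest spec: ${_LATEST_SPEC:-none} | Latest plan: ${_LATEST_PLAN:-none}"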
Determine the resume stage from the artifacts found (see the workflow skill for the stage → skill mapping) and announce:
"Continuing in superomni mode — picking up at [stage] using [skill-name]." See using-skills/SKILL.md.

When asking the user a question, match the confirmation requirement to the complexity of the response:
| Question type | Confirmation rule |
|---|---|
| Single-choice — user picks one option (A/B/C, 1/2/3, Yes/No) | The user's selection IS the confirmation. Do NOT ask "Are you sure?" or require a second submission. |
| Free-text input — user types a value and presses Enter | The submitted text IS the confirmation. No secondary prompt needed. |
| Multi-choice — user selects multiple items from a list | After the user lists their selections, ask once: "Confirm these selections? (Y to proceed)" before acting. |
| Complex / open-ended discussion — back-and-forth clarification | Collect all input, then present a summary and ask: "Ready to proceed with the above? (Y/N)" before acting. |
Rule: never add a redundant confirmation layer on top of a single-choice or text-input answer.
Custom Input Option Rule: Whenever you present a predefined list of choices (A/B/C, numbered options, etc.), always append a final "Other" option that lets the user describe their own idea:
[last letter/number + 1]) Other — describe your own idea: ___________
When the user selects "Other" and provides their custom text, treat that text as the chosen option and proceed exactly as you would for any other selection. If the custom text is ambiguous, ask one clarifying question before proceeding.
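A hypothetical example (the topic and option names are illustrative only):
Which package manager should this project use?
  1) npm
  2) pnpm
  3) Other — describe your own idea: ___________
If the user answers "3" and types "we use yarn workspaces", treat yarn workspaces as the chosen option and proceed.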
Load context progressively — only what is needed for the current phase:
| Phase | Load these | Defer these |
|---|---|---|
| Planning | Latest docs/superomni/specs/spec-*.md, constraints, prior decisions | Full codebase, test files |
| Implementation | Latest docs/superomni/plans/plan-*.md, relevant source files | Unrelated modules, docs |
| Review/Debug | diff, failing test output, minimal repro | Full history, specs |
If context pressure is high: summarize prior phases into 3-5 bullet points, then discard raw content.
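A hypothetical compressed summary (all details illustrative):
- Spec: add rate limiting to the public API (latest spec file)
- Decisions so far: token bucket, per API key, Redis-backed
- Open question: burst limit value unconfirmed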
All skill artifacts are written to docs/superomni/ (relative to project root).
See the Document Output Convention in CLAUDE.md for the full directory map.
Agent failures are harness signals — not reasons to retry the same approach:
Use the harness-engineering skill to update the harness before retrying. It is always OK to stop and say "this is too hard for me." Escalation is expected, not penalized.
After completing any skill session, run a 3-question self-check before writing the final status:
If any answer is NO, address it before reporting DONE. If it cannot be addressed, report DONE_WITH_CONCERNS and name the gap.
For a full performance evaluation spanning the entire sprint, use the self-improvement skill.
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
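# Substitute the actual skill name and final status (DONE, BLOCKED, ...) for the SKILL_NAME and OUTCOME placeholders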
~/.claude/skills/superomni/bin/analytics-log "SKILL_NAME" "$_TEL_DUR" "OUTCOME" 2>/dev/null || true
Nothing is sent to external servers. Data is stored only in ~/.omni-skills/analytics/.
Goal: Systematically verify that work is complete and correct before declaring done.
"I think it works" is not evidence. "I believe it's correct" is not evidence. Evidence is: running the code and showing output, passing test results, or observable behavior.
Run through this before reporting any status:
Before any technical checks, verify the output achieves what the user originally asked for.
# Read acceptance criteria from spec or plan
_SPEC=$(ls docs/superomni/specs/spec-*.md 2>/dev/null | sort | tail -1)
_PLAN=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
cat "$_SPEC" 2>/dev/null | grep -A 30 "Acceptance Criteria" | head -40 || \
cat "$_PLAN" 2>/dev/null | grep -A 20 "Success Criteria" | head -30 || \
echo "No docs/superomni/specs/spec-*.md or docs/superomni/plans/plan-*.md found"
For each acceptance criterion in docs/superomni/specs/spec-*.md or docs/superomni/plans/plan-*.md:
| Criterion | Met? | Evidence |
|---|---|---|
| [criterion from spec] | ✓/✗ | [specific proof: test output, observable behavior, or code reference] |
If no docs/superomni/specs/spec-*.md exists: fall back to the latest plan's Success Criteria, or failing that to the user's original request, and verify against those.
Gate: Cannot report DONE if any P0 acceptance criterion is unmet.
# Run tests
npm test 2>&1 | tail -20
# or
pytest -v 2>&1 | tail -20
# or
go test ./... 2>&1 | tail -20
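If the stack is not known up front, a minimal auto-detect sketch (assumes one of the three runners above; the marker files are a heuristic, adjust for your project):
if [ -f package.json ]; then
  npm test 2>&1 | tail -20
elif [ -f go.mod ]; then
  go test ./... 2>&1 | tail -20
elif [ -f pyproject.toml ] || [ -f pytest.ini ]; then
  pytest -v 2>&1 | tail -20
else
  echo "No recognized test runner found; run the project's test command manually"
fi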
Hard gate for new code: If new source code was written and no tests exist for it, report BLOCKED — do not advance to DONE until tests are added. The only valid exception is a documented reason (pure UI layout, throw-away prototype).
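A hypothetical exception note (wording illustrative): "No tests added: pure CSS layout change, no logic." Record it under Concerns in the report.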
# Step 1: List source files changed (exclude tests and docs)
git diff HEAD --name-only | grep -vE "(test|spec|\.md$|\.txt$)" | head -10
# Step 2: List test files changed
git diff HEAD --name-only | grep -E "(test|spec|_test\.|\.test\.)" | head -10
# Step 3: Check if any source file has a corresponding test file
# For each changed source file, search for a test file by base name
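# Caveat: the command substitution below word-splits on whitespace; file paths containing spaces are not handled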
for f in $(git diff HEAD --name-only | grep -vE "(test|spec|\.md$)"); do
base=$(basename "$f" | sed 's/\..*//')
found=$(find . -name "*${base}*test*" -o -name "*${base}*spec*" -o \
-name "test_*${base}*" 2>/dev/null | head -1)
if [ -z "$found" ]; then
echo "MISSING TESTS: $f (no test file found for '$base')"
else
echo "HAS TESTS: $f → $found"
fi
done
# Run full test suite (not just new tests)
npm test 2>&1 | grep -E "(PASS|FAIL|Error)" | head -20
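One optional cross-check (a sketch; assumes stashing is safe in your current working state and an npm project, per the examples above): run the same suite against the pre-change tree to confirm any failures are not pre-existing.
git stash push --include-untracked
npm test 2>&1 | tail -5    # baseline: failures here predate the change
git stash pop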
# Quick diff review
git diff HEAD --stat
git diff HEAD | grep -E "console\.log|debugger|TODO|FIXME|print\(" | head -10
# Summary line for the report's "Files changed" field
git diff HEAD --stat | tail -1

After completing the checklist, present this report:
VERIFICATION REPORT
════════════════════════════════════════
Task: [what was being implemented/fixed]
Tests run: [N tests, N passing, N failing]
Goal Alignment:
Spec/plan used: [docs/superomni/specs/spec-*.md | docs/superomni/plans/plan-*.md | user request]
✓/✗ [acceptance criterion 1] — [evidence]
✓/✗ [acceptance criterion 2] — [evidence]
User goal achieved: YES | PARTIAL | NO
Acceptance criteria:
✓ [criterion 1]
✓ [criterion 2]
✗ [criterion 3] — FAILED (explain why)
Files changed: [N files]
Regressions: [none | list any]
Evidence: [test output snippet or observed behavior]
Status: DONE | DONE_WITH_CONCERNS | BLOCKED
Concerns (if any):
- [concern 1 with recommendation]
════════════════════════════════════════
After completing verification, save the report as a persistent Markdown document:
EVAL_DIR="docs/superomni/evaluations"
mkdir -p "$EVAL_DIR"
BRANCH=$(git branch --show-current 2>/dev/null | tr '/' '-')
BRANCH=${BRANCH:-main}   # tr exits 0 even on empty input, so default explicitly
TIMESTAMP=$(date +%Y-%m-%d-%H%M%S)
EVAL_FILE="$EVAL_DIR/evaluation-${BRANCH}-${TIMESTAMP}.md"
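With these values the path comes out like docs/superomni/evaluations/evaluation-main-2025-01-30-142000.md (hypothetical example).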
Write the full VERIFICATION REPORT block (including all checklist results, test output, and goal alignment table) to $EVAL_FILE in this format:
# Verification Evaluation: [branch]
**Date:** [date]
**Branch:** [branch]
**Task:** [what was being verified]
## Checklist Results
| Check | Result | Notes |
|-------|--------|-------|
| Functional verification | ✓/✗ | |
| Test verification | ✓/✗ | |
| Regression verification | ✓/✗ | |
| Completeness | ✓/✗ | |
| No regressions | ✓/✗ | |
| Blast radius | ✓/✗ | |
## Goal Alignment
Spec/plan used: [docs/superomni/specs/spec-*.md | docs/superomni/plans/plan-*.md | user request]
| Criterion | Met? | Evidence |
|-----------|------|----------|
| [criterion 1] | ✓/✗ | [proof] |
## Evidence
[Test output snippet or observed behavior]
## Verdict
[Paste full VERIFICATION REPORT block here]
**Status:** DONE | DONE_WITH_CONCERNS | BLOCKED
echo "Evaluation saved to $EVAL_FILE"
This file is the permanent task evaluation record. It feeds into self-improvement and future sprint retrospectives.
If any check fails: do not report DONE. Fix the issue, or report DONE_WITH_CONCERNS or BLOCKED and name the specific failing check.