Use when auditing code or systems for security vulnerabilities. Combines OWASP Top 10 checklist with STRIDE threat modeling. Triggers: "security audit", "security review", "threat model", "vulnerability scan".
```bash
mkdir -p ~/.omni-skills/sessions
_PROACTIVE=$(~/.claude/skills/superomni/bin/config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_TEL_START=$(date +%s)
echo "Branch: $_BRANCH | PROACTIVE: $_PROACTIVE"
```
If PROACTIVE is false: do NOT proactively suggest skills. Only run skills the
user explicitly invokes. If you would have auto-invoked, say:
"I think [skill-name] might help here — want me to run it?" and wait.
Report status using one of these at the end of every skill session:
Pipeline stage order: THINK → PLAN → REVIEW → BUILD → VERIFY → SHIP → REFLECT
REVIEW is the only human gate. All other stages auto-advance on DONE.
| Status | At REVIEW stage | At all other stages |
|---|---|---|
| DONE | STOP — present review summary, wait for user input (Y / N / revision notes) | Auto-advance — print [STAGE] DONE → advancing to [NEXT-STAGE] and immediately invoke next skill |
| DONE_WITH_CONCERNS | STOP — present concerns, wait for user decision | STOP — present concerns, wait for user decision |
| BLOCKED / NEEDS_CONTEXT | STOP — present blocker, wait for user | STOP — present blocker, wait for user |
When auto-advancing, print: `[STAGE] DONE → advancing to [NEXT-STAGE] ([skill-name])`

When the user sends a follow-up message after a completed session, before doing anything else:
```bash
ls docs/superomni/specs/spec-*.md docs/superomni/plans/plan-*.md docs/superomni/ .superomni/ 2>/dev/null | head -20
git log --oneline -3 2>/dev/null
```
To find the latest spec or plan:
```bash
_LATEST_SPEC=$(ls docs/superomni/specs/spec-*.md 2>/dev/null | sort | tail -1)
_LATEST_PLAN=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
```
Resume at the appropriate stage (see the workflow skill for the stage → skill mapping) and announce:
"Continuing in superomni mode — picking up at [stage] using [skill-name]."

When asking the user a question, match the confirmation requirement to the complexity of the response:
| Question type | Confirmation rule |
|---|---|
| Single-choice — user picks one option (A/B/C, 1/2/3, Yes/No) | The user's selection IS the confirmation. Do NOT ask "Are you sure?" or require a second submission. |
| Free-text input — user types a value and presses Enter | The submitted text IS the confirmation. No secondary prompt needed. |
| Multi-choice — user selects multiple items from a list | After the user lists their selections, ask once: "Confirm these selections? (Y to proceed)" before acting. |
| Complex / open-ended discussion — back-and-forth clarification | Collect all input, then present a summary and ask: "Ready to proceed with the above? (Y/N)" before acting. |
Rule: never add a redundant confirmation layer on top of a single-choice or text-input answer.
Custom Input Option Rule: Whenever you present a predefined list of choices (A/B/C, numbered options, etc.), always append a final "Other" option that lets the user describe their own idea:
[last letter/number + 1]) Other — describe your own idea: ___________
When the user selects "Other" and provides their custom text, treat that text as the chosen option and proceed exactly as you would for any other selection. If the custom text is ambiguous, ask one clarifying question before proceeding.
Load context progressively — only what is needed for the current phase:
| Phase | Load these | Defer these |
|---|---|---|
| Planning | Latest docs/superomni/specs/spec-*.md, constraints, prior decisions | Full codebase, test files |
| Implementation | Latest docs/superomni/plans/plan-*.md, relevant source files | Unrelated modules, docs |
| Review/Debug | diff, failing test output, minimal repro | Full history, specs |
If context pressure is high: summarize prior phases into 3-5 bullet points, then discard raw content.
All skill artifacts are written to docs/superomni/ (relative to project root).
See the Document Output Convention in CLAUDE.md for the full directory map.
Agent failures are harness signals — not reasons to retry the same approach:
Use the harness-engineering skill to update the harness before retrying. It is always OK to stop and say "this is too hard for me." Escalation is expected, not penalized.
After completing any skill session, run a 3-question self-check before writing the final status:
If any answer is NO, address it before reporting DONE. If it cannot be addressed, report DONE_WITH_CONCERNS and name the gap.
For a full performance evaluation spanning the entire sprint, use the self-improvement skill.
```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
~/.claude/skills/superomni/bin/analytics-log "SKILL_NAME" "$_TEL_DUR" "OUTCOME" 2>/dev/null || true
```
Nothing is sent to external servers. Data is stored only in ~/.omni-skills/analytics/.
Goal: Systematically identify security vulnerabilities using OWASP Top 10 and STRIDE threat modeling, then produce an actionable report with severity ratings.
NEVER CLEAR A FINDING WITHOUT EVIDENCE.
"I don't think this is exploitable" is not evidence. Evidence is: a specific test showing the input is sanitized, a configuration proving the feature is disabled, or a code path demonstrating the guard clause exists.
Before looking for vulnerabilities, map what you're protecting.
```bash
# Identify entry points (APIs, CLI, UI)
grep -rn "app\.get\|app\.post\|app\.put\|app\.delete\|router\." . \
  --include="*.js" --include="*.ts" --include="*.py" -l 2>/dev/null | head -20

# Find authentication/authorization code
grep -rn "auth\|login\|session\|token\|jwt\|password\|credential" . \
  --include="*.js" --include="*.ts" --include="*.py" --include="*.go" \
  -l 2>/dev/null | head -20

# Find data stores and connections
grep -rn "database\|mongoose\|sequelize\|prisma\|redis\|sql\|connect" . \
  --include="*.js" --include="*.ts" --include="*.py" --include="*.go" \
  -l 2>/dev/null | head -20

# Find configuration and secrets
find . -name "*.env*" -o -name "*.pem" -o -name "*.key" -o -name "*secret*" \
  2>/dev/null | grep -v "node_modules\|.git" | head -20
```
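Beyond filename patterns, a quick heuristic for spotting hard-coded secrets inside files is Shannon entropy: random API tokens score high, while English identifiers score low. A rough sketch (the length and entropy thresholds are illustrative, not calibrated):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Bits per character: random base64/hex tokens approach 4-6,
    # English words and identifiers sit around 3-4.
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_secret(token: str) -> bool:
    # Illustrative thresholds: long AND high-entropy is suspicious.
    return len(token) >= 20 and shannon_entropy(token) > 4.0

assert looks_like_secret("AKIAIOSFODNN7EXAMPLEKEY123xYz")   # token-shaped
assert not looks_like_secret("configuration_setting")        # plain English
```

Flagged strings still need the evidence rule applied: confirm the value is a real credential, not a test fixture, before reporting it.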
Document the attack surface:
```
AUDIT SURFACE
─────────────────────────────────
Entry points: [list APIs, CLI, UI routes]
Auth boundaries: [where auth is checked]
Data flows: [input → processing → storage → output]
Trust boundaries: [internal vs. external, user vs. admin]
Sensitive data: [PII, credentials, tokens, financial]
─────────────────────────────────
```
Work through each category. For each, check the codebase and record findings.
```bash
# Check for missing auth middleware (heuristic: flags route lines
# with no auth-related keyword nearby — verify each hit manually)
grep -rn "app\.get\|app\.post\|router\." . --include="*.js" --include="*.ts" \
  -A 2 | grep -v "auth\|protect\|middleware\|guard" | head -20
```
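For reference, what a present guard looks like, in framework-neutral Python (the decorator and request shape are illustrative, not any specific framework's API):

```python
from functools import wraps

class AuthError(Exception):
    pass

def require_auth(handler):
    # Illustrative middleware: every route handler the grep above
    # flags should be wrapped in something equivalent to this.
    @wraps(handler)
    def wrapped(request):
        if not request.get("user"):
            raise AuthError("unauthenticated")
        return handler(request)
    return wrapped

@require_auth
def delete_account(request):
    return f"deleted {request['user']}"

assert delete_account({"user": "alice"}) == "deleted alice"
try:
    delete_account({})          # no session -> must be rejected
    assert False, "should have raised"
except AuthError:
    pass
```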
```bash
# Check for weak or obsolete crypto primitives
grep -rn "md5\|sha1\|DES\|RC4\|ECB" . \
  --include="*.js" --include="*.ts" --include="*.py" --include="*.go" \
  2>/dev/null | head -10
```
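If this grep fires on password handling, the finding is cleared only by showing the weak hash replaced with a salted KDF. A standard-library sketch (the iteration count is illustrative; tune it to your latency budget):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # PBKDF2-HMAC-SHA256 instead of bare md5/sha1; random per-user salt.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)
```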
```bash
# SQL injection risk
grep -rn "exec\|raw\|query.*+" . --include="*.js" --include="*.ts" --include="*.py" \
  | grep -v "node_modules" | head -20

# Command injection risk
grep -rn "exec(\|spawn(\|system(\|popen\|subprocess" . \
  --include="*.js" --include="*.ts" --include="*.py" \
  | grep -v "node_modules" | head -20
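The pattern the SQL-injection grep hunts for, and its remediation, side by side (sqlite3 is used purely for illustration; the same placeholder discipline applies to any driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# VULNERABLE: string concatenation -- the "query.*+" pattern above.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# SAFE: parameterized placeholder; the driver never interprets
# user_input as SQL, so the payload matches no real user.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []

rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", ("alice",)
).fetchall()
assert rows == [("admin",)]
```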
```bash
# Check for debug/dev flags in production configs
grep -rn "debug.*true\|DEBUG=1\|NODE_ENV.*development" . \
  --include="*.env*" --include="*.yaml" --include="*.json" \
  | grep -v "node_modules\|.git" | head -10
```
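A cheap runtime guard for the misconfiguration above is to fail closed at startup (the env variable names here are illustrative, not a convention the codebase necessarily uses):

```python
def check_config(env):
    # Fail closed: debug mode and production must never coexist.
    if env.get("APP_ENV") == "production" and env.get("DEBUG", "0") == "1":
        raise RuntimeError("DEBUG=1 is not allowed in production")

check_config({"APP_ENV": "production", "DEBUG": "0"})   # fine
check_config({"APP_ENV": "dev", "DEBUG": "1"})          # fine outside prod
try:
    check_config({"APP_ENV": "production", "DEBUG": "1"})
    assert False, "should have refused to start"
except RuntimeError:
    pass
```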
Vulnerable and outdated components are covered in Phase 4 (Dependency Audit).
```bash
# Check for unsafe deserialization
grep -rn "pickle\.load\|yaml\.load\|eval(\|JSON\.parse.*unvalidated" . \
  --include="*.py" --include="*.js" --include="*.ts" \
  | grep -v "node_modules" | head -10
```
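The safe counterparts to the flagged calls: `yaml.safe_load` in place of `yaml.load`, and `json.loads` or `ast.literal_eval` in place of `eval`. Shown with the standard library only:

```python
import ast
import json

untrusted = '{"role": "user", "id": 7}'

# SAFE: json.loads only ever builds plain data, never objects or code.
data = json.loads(untrusted)
assert data["role"] == "user"

# SAFE: ast.literal_eval accepts only Python literals and rejects
# anything with calls or attribute access.
assert ast.literal_eval("[1, 2, 3]") == [1, 2, 3]
try:
    ast.literal_eval("__import__('os').system('id')")
    assert False, "should have been rejected"
except (ValueError, SyntaxError):
    pass
```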
```bash
# Check for SSRF risk
grep -rn "fetch(\|axios\|request(\|urllib\|http\.get" . \
  --include="*.js" --include="*.ts" --include="*.py" \
  | grep -v "node_modules" | head -10
```
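A minimal SSRF guard for the outbound-request sites this grep finds: resolve the hostname and reject private, loopback, and link-local ranges before fetching. This is a sketch; a production guard also needs redirect handling and DNS-rebinding defenses:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Reject anything that resolves into internal address space.
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

assert not is_safe_url("http://127.0.0.1/admin")                      # loopback
assert not is_safe_url("http://169.254.169.254/latest/meta-data/")    # cloud metadata
assert not is_safe_url("http://10.0.0.1/")                            # private range
```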
For each trust boundary identified in Phase 1, evaluate all six STRIDE categories:
| Threat | Question | Example |
|---|---|---|
| Spoofing | Can an attacker pretend to be someone else? | Forged auth tokens, session hijacking |
| Tampering | Can an attacker modify data they shouldn't? | Unsigned API payloads, unprotected DB writes |
| Repudiation | Can an attacker deny their actions? | Missing audit logs, no transaction records |
| Info Disclosure | Can an attacker access data they shouldn't? | Verbose errors, exposed stack traces, directory listing |
| Denial of Service | Can an attacker make the system unavailable? | Unbound queries, no rate limiting, resource exhaustion |
| Elevation of Privilege | Can an attacker gain higher access? | IDOR, privilege escalation, admin bypass |
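The classic instance of the Elevation-of-Privilege row is IDOR: a handler trusts a client-supplied ID without checking ownership. A sketch of the check to look for (all names illustrative):

```python
documents = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document(doc_id: int, current_user: str) -> dict:
    doc = documents[doc_id]
    # The IDOR fix: authorize against the authenticated session user,
    # never against anything the client sent.
    if doc["owner"] != current_user:
        raise PermissionError("not your document")
    return doc

assert get_document(1, "alice")["body"] == "alice's notes"
try:
    get_document(2, "alice")   # alice guessing bob's ID
    assert False, "should have been denied"
except PermissionError:
    pass
```

When auditing, absence of such an ownership check on any object-by-ID endpoint is a finding even without a demonstrated exploit path.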
For each finding:
```
STRIDE FINDING
─────────────────────────────────
Category: [S/T/R/I/D/E]
Location: [file:line or component]
Description: [what the threat is]
Exploitability: [how an attacker could use this]
Evidence: [code snippet or configuration proving the issue]
─────────────────────────────────
```
```bash
# Check for known CVEs in dependencies
npm audit 2>/dev/null || true
# Note: bare `a || b && c` would re-run pip-audit after a successful
# first run; brace the install-and-retry fallback instead.
pip-audit 2>/dev/null || { pip install pip-audit && pip-audit; } 2>/dev/null || true
go list -m -json all 2>/dev/null | head -50 || true

# Check dependency age and maintenance
npm outdated 2>/dev/null || pip list --outdated 2>/dev/null || true

# Verify lock files exist
ls -la package-lock.json yarn.lock pnpm-lock.yaml Pipfile.lock go.sum 2>/dev/null
```
For each vulnerable dependency:
```
DEPENDENCY FINDING
─────────────────────────────────
Package: [name@version]
CVE: [CVE-XXXX-XXXXX]
Severity: [CRITICAL/HIGH/MEDIUM/LOW]
Fix: [upgrade to version X.Y.Z | no fix available]
Exploitable: [yes — explain how | no — explain why not]
─────────────────────────────────
```
```
SECURITY AUDIT REPORT
════════════════════════════════════════
Scope: [what was audited]
Date: [YYYY-MM-DD]
Method: OWASP Top 10 + STRIDE + Dependency Audit

FINDINGS SUMMARY
────────────────────────────────────────
Critical: [N] — must fix before deploy
High: [N] — fix within 1 sprint
Medium: [N] — fix within 1 month
Low: [N] — track and address opportunistically
Info: [N] — no action required

CRITICAL FINDINGS
────────────────────────────────────────
[C1] [Title]
Location: [file:line]
Category: [OWASP/STRIDE category]
Description: [what is wrong]
Impact: [what could happen]
Remediation: [how to fix]
Evidence: [proof this is real]

HIGH FINDINGS
────────────────────────────────────────
[H1] [Title]
[same format as critical]

DEPENDENCY VULNERABILITIES
────────────────────────────────────────
[list from Phase 4]

AREAS NOT COVERED
────────────────────────────────────────
[any areas that were out of scope or inaccessible]

Status: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT
════════════════════════════════════════
```