A security audit that the operator will actually read.
Scans codebases for OWASP Top 10 vulnerabilities via static analysis: secret exposure, injection flaws, auth/authz gaps, supply-chain risks, misconfigurations, logging failures. Use before deployments, PR merges, auth/payment changes.
Enterprise scanners emit hundreds of warnings. Non-developers silence them within a week and then a real vulnerability slips through because nobody's reading anymore. The opposite strategy works better: a short, hand-curated list of patterns that are almost always actionable, each one triaged individually. Every hit gets classified. Every classification has a reason.
Never report more than ~20 findings. If the automated sweep returns 100 hits, triage them into categories and report the categories, not the raw list. The operator's attention is the scarcest resource and the wrong optimization is "comprehensive."
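One hypothetical way to collapse an oversized sweep into category counts; the data shapes and names below are illustrative, not part of the skill:

```python
from collections import Counter

# Hypothetical sweep output: (category, "file:line") pairs.
hits = [("hardcoded-secret", "config.py:12"), ("sql-injection", "api/orders.py:88")]
hits += [("unpinned-dependency", f"requirements.txt:{n}") for n in range(1, 101)]

MAX_FINDINGS = 20  # never report more than ~20 individual findings

if len(hits) <= MAX_FINDINGS:
    report = [f"{cat} at {loc}" for cat, loc in hits]
else:
    # Triage into categories and report the categories, not the raw list.
    counts = Counter(cat for cat, _ in hits)
    report = [f"{cat}: {n} hits" for cat, n in counts.most_common()]
```

The 102 raw hits above collapse into three report lines, which is the shape an operator will actually read.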
Any of:
- The operator asks for a security sweep or reports something like ".env exposed"
- A handoff from another skill (e.g. refactor-verify)

If the operator asks for something outside this skill's scope (compliance audits, penetration testing, crypto review), say so plainly and suggest a specialist tool.
Before running, establish three things with the operator. Do not start the sweep until these are clear:
- Scope: which files or directories to sweep
- Reporting threshold: report everything, or only critical/high
- Runtime context: public-facing, authenticated-only, or internal
If the operator doesn't know the answers, pick the most conservative defaults: whole repo, report all, assume public-facing. Tell them that's what you picked.
These are the ten categories. Each category has language-specific grep/AST patterns in references/patterns.md. Run them all; don't skip any for time.
Grep for known token prefixes (`sk-`, `ghp_`, `xoxb-`, `-----BEGIN`, etc.). For each suspect value, run `git log --all -p -S<suspect>` to see whether it was ever committed historically. Also check whether secret files themselves are tracked:
```shell
git ls-files | grep -iE '\.env$|\.env\.|\.pem$|id_rsa|id_ed25519|credentials|\.ppk$'
```
Anything returned here is an immediate HIGH regardless of content.
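A minimal sketch of the prefix sweep in Python. The token shapes here are approximations; the authoritative patterns live in references/patterns.md:

```python
import re

# Approximate token shapes; extend per references/patterns.md.
SECRET_PREFIXES = [
    r"sk-[A-Za-z0-9]{16,}",                    # OpenAI-style keys
    r"ghp_[A-Za-z0-9]{36}",                    # GitHub personal access tokens
    r"xoxb-[A-Za-z0-9-]+",                     # Slack bot tokens
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",     # PEM private keys
]
SECRET_RE = re.compile("|".join(SECRET_PREFIXES))

def scan_text(text, path="<stdin>"):
    """Return (path, line_number, matched_text) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = SECRET_RE.search(line)
        if m:
            findings.append((path, lineno, m.group(0)))
    return findings
```

Run this across tracked files only after the `git ls-files` check above; a tracked `.env` outranks any individual match.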
SQL built by pasting strings. Language-agnostic pattern:
- Query strings assembled at a call to `execute`, `query`, `raw`, or `fetch`
- `.format()` / `%` inside a query call
- Concatenation (`+`, `.`, `..`) inside a query call

Prepared-statement placeholders (`?`, `$1`, `:name`) are the safe form. Flag anything else.
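The difference is easy to demonstrate with stdlib `sqlite3`; a minimal sketch, with table and input values that are purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

user_input = "alice' OR '1'='1"

# UNSAFE: string interpolation inside a query call. Flag this.
unsafe = conn.execute(
    f"SELECT id FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: prepared-statement placeholder.
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The interpolated query returns every row because the injected `OR '1'='1'` is parsed as SQL; the placeholder version treats the whole input as a literal and matches nothing.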
**Command injection:**
- `subprocess.run(..., shell=True)`
- `child_process.exec(userInput)` (as opposed to `execFile`/`spawn` with an argument array)
- `$()` with user input in shell scripts
- `os.system` anywhere

**Unsafe dynamic evaluation:**
- `eval` / `exec` / `Function(userInput)` / `setTimeout(userInput, ...)` (in JS)
- `pickle.loads` on anything that might come from a network
- `yaml.load` without `SafeLoader`

**Path traversal:**
- `open(request.something)`, `fs.readFile(req.query.x)`, `File.new(params[:path])`
- `FileResponse` / `sendFile` with user-controlled path segments

**XSS sinks:**
- `innerHTML =`, `dangerouslySetInnerHTML`, `v-html`, `{@html ...}` (Svelte)
- `document.write(`
- `|safe`, Django `mark_safe`

**Markdown renderers:**
- `marked` does not sanitize HTML output. The historical `sanitize: true` option was deprecated and removed; the renderer will faithfully reproduce any `<script>` it finds in the input. Pair it with DOMPurify on the rendered HTML before injecting it into the DOM.
- `markdown-it` disables HTML input by default (`html: false`), which is safe as long as the option is not overridden. If `html: true` is set anywhere, the output must be sanitized downstream (DOMPurify again).
- `showdown` similarly does not sanitize. Use a sanitizer on the output.
- Run `bleach` (Python), `sanitize-html` (Node), or a language equivalent after rendering. Never trust Markdown input to be safe just because the renderer "supports" sanitization.

**Insecure deserialization:**
- `pickle.loads` / `cloudpickle.loads`
- `yaml.load` / `yaml.Loader` (use `yaml.safe_load` / `yaml.SafeLoader`)
- `Marshal.load` (Ruby) on untrusted input
- `ObjectInputStream` (Java) on untrusted streams
- `unserialize` (PHP) on user input

**Cookie flags:** every `set_cookie` / `Set-Cookie` call should include `httpOnly`, `Secure`, and `SameSite`. Flag any that don't.
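The cookie-flag rule can be demonstrated with the standard library; a minimal sketch, with a cookie name and value that are illustrative:

```python
from http.cookies import SimpleCookie

# A Set-Cookie value carrying all three flags the audit expects.
cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"
cookie["session"]["httponly"] = True
cookie["session"]["secure"] = True
cookie["session"]["samesite"] = "Lax"

header_value = cookie["session"].OutputString()

# Flag any Set-Cookie value missing one of these attributes.
missing = [f for f in ("HttpOnly", "Secure", "SameSite") if f not in header_value]
```

A sweep can apply the same `missing` check to every `Set-Cookie` string it finds in responses or framework calls.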
Any response header Access-Control-Allow-Origin: * on an endpoint that isn't a public CDN. Especially dangerous if combined with Access-Control-Allow-Credentials: true (actually forbidden by the spec but some code tries).
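A sketch of that header check; the function name and note strings are hypothetical, not part of the skill's interface:

```python
def flag_cors(headers):
    """Audit a dict of response headers for the dangerous CORS combinations."""
    notes = []
    origin = headers.get("Access-Control-Allow-Origin")
    creds = headers.get("Access-Control-Allow-Credentials")
    if origin == "*":
        notes.append("wildcard ACAO: verify this endpoint is truly public")
        if creds == "true":
            # Forbidden by the Fetch spec, but some code sets it anyway.
            notes.append("wildcard ACAO with credentials: treat as HIGH")
    return notes
```

An explicit origin produces no notes; the wildcard-plus-credentials combination produces two.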
**Supply chain:**
- Unpinned dependencies (`requirements.txt` with no `==`, `package.json` with `^` or `*`, `Cargo.toml` with no lockfile)
- Missing lockfiles (`package-lock.json`, `Cargo.lock`, `Gemfile.lock`)

**Auth / crypto misuse:**
- Hardcoded authorization (`if user_id == 1:` style)
- JWT `none` algorithm accepted
- `==` instead of a constant-time compare for secrets
- `Math.random()` used for session tokens / salts / OTPs (not cryptographically secure)

For every hit, classify it as one of:
| Classification | Meaning | Action |
|---|---|---|
| REAL — CRITICAL | Exploitable remotely, data exposure, or secret leak | Fix now, then commit |
| REAL — HIGH | Exploitable but requires auth or specific context | Fix this sprint |
| REAL — MEDIUM | Defense-in-depth gap, not directly exploitable | Queue for cleanup |
| FALSE POSITIVE | The pattern matched but the code is actually safe | Explain why it's safe and move on — do not "fix" it |
| NEEDS REVIEW | You can't tell without more context (e.g., is this input trusted?) | Ask the operator one specific question |
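One hypothetical way to carry this triage through tooling; the type names and field layout below are illustrative, not part of the skill:

```python
from dataclasses import dataclass
from enum import Enum

class Triage(Enum):
    REAL_CRITICAL = "fix now, then commit"
    REAL_HIGH = "fix this sprint"
    REAL_MEDIUM = "queue for cleanup"
    FALSE_POSITIVE = "explain why it's safe and move on"
    NEEDS_REVIEW = "ask the operator one specific question"

@dataclass
class Finding:
    category: str
    location: str   # "file:line"
    evidence: str   # the quoted line
    triage: Triage
    reason: str     # every classification has a reason

f = Finding("hardcoded-secret", "config.py:12", 'API_KEY = "sk-..."',
            Triage.REAL_CRITICAL, "live key committed to a tracked file")
```

Making `reason` a required field enforces the rule that every classification must be justified, not just asserted.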
Never just list findings without classification. A raw list of grep matches is worse than no audit, because the operator can't tell signal from noise.
When explaining a false positive, be specific: "This f-string inside a query is safe because days is an integer from int(request.query.get(...)) clamped to 0..365 on line 391." The specificity proves you actually looked.
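The clamp described in that example can be sketched as a standalone helper; the function name, default, and bounds are illustrative, not from the skill:

```python
def clamp_days(raw, default=30):
    """Coerce untrusted query input to an int in 0..365.

    After this, interpolating the value into a query is safe because
    it can only ever be a bounded integer, never attacker-controlled text.
    """
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default
    return max(0, min(365, value))
```

A false-positive writeup that points at a guard like this, with its file and line, proves the triage actually traced the value.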
Per-pattern triage rules: references/false-positive-triage.md
Always structured. Never a prose wall.
```markdown
# Security sweep — <scope>

## Summary
- Scope: <files/dirs swept>
- Runtime context: <public / authenticated / internal>
- Total findings: <N>
- Triage: <X critical, Y high, Z medium, W false-positive>

## Critical (<N>)

### 1. <Category> — <file:line>
**What was found:** <quoted line>
**Why it's real:** <one-sentence reason>
**Fix:** <concrete code change or command>

### 2. ...

## High (<N>)
...

## Medium (<N>)
...

## False positives (<N>)
Listed only to show they were checked. No action needed.

### 1. <Category> — <file:line>
**Why not a vulnerability:** <specific reason>

## Needs review (<N>)
One specific question per item so the operator can answer and close it.

### 1. <Category> — <file:line>
**Question:** <precise question>
```
This skill is a minimal hand-curated sweep, not a full application security audit. If the operator needs any of the following, say so plainly and recommend a dedicated tool or human reviewer.
| Class of issue | Why this skill doesn't cover it | What to use instead |
|---|---|---|
| SSRF (Server-Side Request Forgery) | Requires dataflow tracing from user input to outbound HTTP clients. Language-agnostic grep is too coarse to avoid false positives. | Semgrep with SSRF rulepacks; language-specific SAST |
| CSRF (Cross-Site Request Forgery) | Framework-specific. Django, Rails, Next.js each have their own CSRF story. A generic checker gives false confidence. | Verify the framework's CSRF middleware is enabled and tokens are required on state-changing routes |
| IDOR (Insecure Direct Object Reference) | Requires understanding the application's authorization model. "Does this user have permission to read /orders/42?" cannot be answered by grep. | Manual review of every route that takes an ID from the URL; pen test for critical flows |
| Unsafe file upload | Requires runtime behavior (magic bytes, content-type validation, storage isolation). A pattern match catches obvious cases but misses most. | Dedicated upload validation libraries; storage in a separate origin |
| Open redirect | Partially covered (hardcoded redirects), but dynamic redirect targets built from query parameters need dataflow analysis. | Whitelist allowed redirect domains; Semgrep open-redirect rulepack |
| Business logic flaws | "The coupon code can be used twice" is a logic bug that no scanner can find. | Pen test; exploratory testing; code review |
| Crypto primitive choice | "You're using AES-CBC without HMAC" is a crypto-design issue. Not a pattern match. | Cryptography review by someone qualified |
| Supply chain compromises in transitive deps | Requires a full SBOM + CVE database join. Out of scope for a single grep sweep. | pip-audit / npm audit / cargo audit / Dependabot |
| Compliance frameworks (SOC 2, HIPAA, PCI-DSS, GDPR) | Legal / procedural, not technical. | Compliance consultant |
When the operator asks for one of these, respond with: "That's outside what audit-security does well. Here's what it would take to actually cover it: [link or tool]. Do you want me to run the standard audit-security sweep in the meantime?"
If the sweep finds a live credential leak, or the operator says "I accidentally pushed my .env", switch immediately to incident mode. Do not run the full triage. Do the following in order:
Rotate the leaked credentials before anything else. Cleanup comes after rotation — as long as the old credentials are still valid, the attacker's window is open. Revoke API keys at the provider; for a leaked SSH key, remove it from `~/.ssh/authorized_keys` on every server and generate a new keypair. Do not skip this step because "the repo is private." Private repos have been exfiltrated by compromised collaborator accounts, leaked CI logs, cloned forks, and accidental public-setting toggles. Treat every exposure as public.
The secret is still in every past commit until you rewrite history.
```shell
# Modern tool (recommended)
git filter-repo --path <path/to/leaked/file> --invert-paths
# or for a specific string:
git filter-repo --replace-text <file-with-patterns>.txt

# Legacy alternative (BFG Repo-Cleaner):
bfg --delete-files <leaked-file>
bfg --replace-text <file-with-secrets>.txt
```

Then force-push:

```shell
git push --force-with-lease origin --all
git push --force-with-lease origin --tags
```
Warn the operator before force-pushing. All collaborators need to re-clone; their existing clones will diverge.
Check whether the secret was reused beyond the leaked `.env`. Ask the operator: "Was this secret also used in any other project, service, or environment?" Credentials are often reused — the leak of one may mean several places are compromised.
Write a short post-mortem in docs/security/incidents/<date>-<what-happened>.md:
Cover how the secret got in (`.env` committed directly? secret in `.github/workflows/*.yml`? hardcoded in source?) and what prevention was applied (`.gitignore` update, `.env.example` audit). Hand the file off to write-for-ai to format. This file is for future AI sessions so the same mistake doesn't recur.
Add these guards before closing the incident:
- `.gitignore` entries for every secret file type (hand off to manage-secrets-env)
- A CI step that runs audit-security, or at minimum a secrets-scanning step like gitleaks, on every commit
- Branch protection on `main` so no one can force-push again

Things to watch for in your own output:
It is easy to burn the whole sweep chasing `eval(` while missing a `.env` file sitting in `git ls-files`. Always check tracked secrets first; it's the highest signal-per-second category.

When the task context contains the `tone=harsh` marker (usually set by the /vibesubin harsh umbrella invocation, but it can also come from direct requests like "don't sugarcoat" / "brutal review" / "make it spicy"), switch output rules:
- Name the exact location: "`src/api/users.py:47`", not "potential information disclosure in the users endpoint".

Harsh mode does not invent findings, fabricate CVSS scores, or become rude. Every harsh statement must be backed by the same evidence the balanced version would cite. The change is framing, not substance.
- Hand off to refactor-verify for the fix.
- `.env` files → hand off to manage-secrets-env for the remediation pattern (rotate, remove from history, add to gitignore, re-examine collaborators).
- Related skills: setup-ci, fight-repo-rot.

References:
- references/patterns.md — concrete grep / AST patterns per category
- references/false-positive-triage.md — how to classify borderline hits

Optional helper tools (the pack does not require them, but uses them when available): Semgrep, Bandit (Python), ESLint-security (JS), gosec (Go), cargo-audit (Rust), pip-audit (Python), npm audit (Node), gitleaks / trufflehog for secret scanning.