# sast-plugin
Run a full SAST (Static Application Security Testing) pipeline across a codebase or set of files, orchestrating Semgrep, Bandit, Trufflehog, and Safety/pip-audit. Produces a structured findings report and a prioritised remediation plan. Use this skill whenever the user wants to: audit code for security issues, run static analysis, check for secrets or credentials in source, scan Python dependencies for CVEs, review a PR or branch for new vulnerabilities, set up a pre-commit or CI security gate, generate a security report, triage SAST findings, or get fix recommendations for security issues. Also trigger when the user mentions any of: Semgrep, Bandit, Trufflehog, Safety, pip-audit, SAST, secrets scanning, dependency audit, or CVE scanning.
    npx claudepluginhub darkflib/skill-marketplace --plugin sast-plugin

This skill uses the workspace's default tool permissions.
Orchestrates a four-tool SAST pipeline, normalises findings into a unified severity model, diffs against a baseline for delta/CI mode, and produces a structured findings report (Markdown + JSON) plus a prioritised remediation plan. Three modes are supported:
| Mode | When to use | Baseline behaviour |
|---|---|---|
| `audit` | Full project scan, initial review | Regenerates baseline |
| `pr` | Ad-hoc code review / branch diff | Diffs against existing baseline |
| `ci` | Pre-commit gate, pipeline step | Diffs; non-zero exit on NEW findings ≥ MEDIUM |
Default mode: `audit` if no baseline exists, `pr` otherwise.
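The mode-selection rule above can be sketched as follows (hypothetical helper; `default_mode` and its `hint` parameter are illustrative names, not part of the plugin):

```python
from pathlib import Path


def default_mode(project_root, hint=None):
    """Pick the scan mode: an explicit hint wins; otherwise choose
    'pr' if a baseline already exists, else 'audit'."""
    if hint in ("audit", "pr", "ci"):
        return hint
    baseline = Path(project_root) / ".sast-baseline.json"
    return "pr" if baseline.exists() else "audit"
```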
Before running anything, establish:

- **Mode:** `audit`, `pr`, or `ci`. Infer from context (PR mention → `pr`, CI/pipeline mention → `ci`, otherwise `audit`).
- **Baseline:** `.sast-baseline.json` in project root.
- **Tool availability:** for each tool, attempt a version probe; note missing tools but do not abort — run what's available and report gaps clearly.
    semgrep --version 2>/dev/null || echo "MISSING: semgrep"
    bandit --version 2>/dev/null || echo "MISSING: bandit"
    trufflehog --version 2>/dev/null || echo "MISSING: trufflehog"
    pip-audit --version 2>/dev/null || safety --version 2>/dev/null || echo "MISSING: safety/pip-audit"
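An equivalent availability probe in Python, for orchestration code that prefers not to shell out (a minimal sketch; `probe_tools` is a hypothetical helper name):

```python
import shutil

TOOLS = ["semgrep", "bandit", "trufflehog", "pip-audit", "safety"]


def probe_tools(tools=TOOLS):
    """Map each tool name to whether its binary is on PATH.
    Missing tools are reported, not fatal."""
    return {tool: shutil.which(tool) is not None for tool in tools}
```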
Installation hints (surface to user if tools missing):
    pip install semgrep bandit pip-audit --break-system-packages
    brew install trufflehog   # macOS
    # or: docker run ghcr.io/trufflesecurity/trufflehog:latest filesystem .
Read references/tools.md for full install options, rule set URLs, and
tool-specific flags before proceeding.
Run all available tools. Capture stdout/stderr; treat non-zero exit as "findings present", not "tool failed" — distinguish via stderr content.
    semgrep scan \
      --config auto \
      --json \
      --severity WARNING \
      --no-rewrite-rule-ids \
      <target_path> 2>/dev/null
For JS/TS targets, add --config "p/javascript". For Python, auto covers it.
See references/tools.md → Semgrep section for curated rule sets.
    bandit -r <target_path> \
      -f json \
      -ll \
      --quiet \
      2>/dev/null
`-ll` raises the minimum reported severity to MEDIUM (LOW-severity issues suppressed). Add `-x tests/,venv/` to exclude noise from test directories.
    trufflehog filesystem <target_path> \
      --json \
      --no-update \
      2>/dev/null
For Git repos, prefer trufflehog git file://<target_path> to scan history.
Note: Trufflehog exits 0 regardless of findings — parse stdout line count.
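Because Trufflehog's exit code carries no signal, findings must be counted from stdout. A minimal sketch, assuming one JSON object per line (the `count_trufflehog_findings` helper name is illustrative):

```python
import json


def count_trufflehog_findings(stdout):
    """Count JSON-object lines in Trufflehog's stdout, skipping
    blanks and any non-JSON noise that leaked into the stream."""
    count = 0
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # not a finding line
        if isinstance(obj, dict):
            count += 1
    return count
```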
    # Prefer pip-audit if available
    pip-audit --format json --output - 2>/dev/null

    # Fallback to Safety
    safety check --json 2>/dev/null
If neither a requirements*.txt nor a pyproject.toml is present, note this
and skip rather than producing a spurious "no vulnerabilities" result.
Map each tool's output to the unified finding schema (see references/severity.md
for full mapping tables):
    {
      "id": "<tool>-<hash-of-path+rule>",
      "tool": "semgrep|bandit|trufflehog|pip-audit",
      "severity": "CRITICAL|HIGH|MEDIUM|LOW|INFO",
      "confidence": "HIGH|MEDIUM|LOW",
      "category": "secret|vuln|sca|misconfig",
      "rule_id": "...",
      "title": "...",
      "description": "...",
      "file": "...",
      "line_start": 0,
      "line_end": 0,
      "cwe": ["CWE-XXX"],
      "cvss": null,
      "fix_available": false,
      "suppressed": false
    }
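A minimal sketch of this normalisation for one Semgrep result, assuming the usual Semgrep JSON shape (`path`, `check_id`, `start`/`end`, `extra.severity`, `extra.message`) and an illustrative severity mapping — the authoritative tables live in references/severity.md:

```python
import hashlib

# Illustrative mapping only; see references/severity.md for the real tables.
SEMGREP_SEVERITY = {"ERROR": "HIGH", "WARNING": "MEDIUM", "INFO": "INFO"}


def normalise_semgrep(result):
    """Map one Semgrep JSON result onto the unified finding schema."""
    path = result["path"]
    rule = result["check_id"]
    extra = result.get("extra", {})
    # Stable id: tool prefix + short hash of path + rule.
    fid = "semgrep-" + hashlib.sha1(f"{path}:{rule}".encode()).hexdigest()[:12]
    return {
        "id": fid,
        "tool": "semgrep",
        "severity": SEMGREP_SEVERITY.get(extra.get("severity", "INFO"), "INFO"),
        "confidence": "MEDIUM",
        "category": "vuln",
        "rule_id": rule,
        "title": rule.rsplit(".", 1)[-1],
        "description": extra.get("message", ""),
        "file": path,
        "line_start": result.get("start", {}).get("line", 0),
        "line_end": result.get("end", {}).get("line", 0),
        "cwe": extra.get("metadata", {}).get("cwe", []),
        "cvss": None,
        "fix_available": "fix" in extra,
        "suppressed": False,
    }
```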
Default severity filter: Drop INFO from the report unless user requests
verbose output. Surface MEDIUM, HIGH, CRITICAL prominently. Keep LOW
in JSON but de-emphasise in Markdown.
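The default filter reduces to a one-liner (hypothetical helper name):

```python
def filter_for_report(findings, verbose=False):
    """Apply the default severity filter: drop INFO unless
    verbose output was requested; keep everything else."""
    if verbose:
        return findings
    return [f for f in findings if f["severity"] != "INFO"]
```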
If `.sast-baseline.json` exists and mode is `pr` or `ci`:

- Match current findings against the baseline by stable `id`.
- Classify each finding: **new** — not in baseline; **resolved** — in baseline but not in current scan; **existing** — present in both.
- `ci` mode: if any new findings with severity ≥ MEDIUM exist, set the exit flag (report this clearly; do not actually exit Claude's process).
- `pr` mode: detail new findings; existing findings summarised only.

In `audit` mode, write/overwrite `.sast-baseline.json` with current findings.
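The delta classification and CI gate can be sketched as follows (hypothetical helpers operating on the unified finding schema):

```python
SEVERITY_ORDER = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]


def diff_against_baseline(current, baseline):
    """Classify findings into new / resolved / existing by stable id."""
    cur_ids = {f["id"] for f in current}
    base_ids = {f["id"] for f in baseline}
    return {
        "new": [f for f in current if f["id"] not in base_ids],
        "resolved": [f for f in baseline if f["id"] not in cur_ids],
        "existing": [f for f in current if f["id"] in base_ids],
    }


def ci_gate_fails(delta):
    """ci mode: fail if any NEW finding is MEDIUM severity or above."""
    threshold = SEVERITY_ORDER.index("MEDIUM")
    return any(SEVERITY_ORDER.index(f["severity"]) >= threshold for f in delta["new"])
```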
    # SAST Report — <project name> — <ISO date>

    ## Executive Summary
    <1–2 sentences: overall posture, critical count, new-vs-existing split>

    ## Findings by Severity
    ### 🔴 CRITICAL (N findings)
    ### 🟠 HIGH (N findings)
    ### 🟡 MEDIUM (N findings)
    ### 🟢 LOW (N findings — detail in JSON)

    ## Secrets / Credential Exposure
    <Trufflehog findings, always surfaced regardless of severity mapping>

    ## Dependency Vulnerabilities (SCA)
    <pip-audit / Safety findings, with CVE IDs and CVSS scores where available>

    ## Tool Coverage
    <table: tool | status | findings count | rule set used>

    ## Baseline Delta (pr/ci mode only)
    <new / resolved / existing counts>
Each finding block:

    ### <severity badge> <title>
    - **File:** `path/to/file.py:42`
    - **Rule:** `<rule_id>`
    - **CWE:** CWE-XXX — <name>
    - **Description:** <normalised description, 1–2 sentences>
    - **Fix hint:** <brief remediation; expand if fix suggestions requested>
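Rendering a unified finding into that block might look like this (hypothetical helper; the badge mapping follows the severity headings above):

```python
BADGES = {"CRITICAL": "🔴", "HIGH": "🟠", "MEDIUM": "🟡", "LOW": "🟢"}


def render_finding(f):
    """Render one unified finding as a Markdown block."""
    lines = [
        f"### {BADGES.get(f['severity'], '')} {f['title']}".strip(),
        f"- **File:** `{f['file']}:{f['line_start']}`",
        f"- **Rule:** `{f['rule_id']}`",
    ]
    if f.get("cwe"):
        lines.append(f"- **CWE:** {', '.join(f['cwe'])}")
    lines.append(f"- **Description:** {f['description']}")
    return "\n".join(lines)
```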
Write the full findings array plus a metadata envelope to `sast-report-<timestamp>.json`:
    {
      "schema_version": "1.0",
      "generated_at": "<ISO8601>",
      "mode": "audit|pr|ci",
      "target": "<path>",
      "tools_run": [...],
      "summary": { "critical": 0, "high": 0, "medium": 0, "low": 0, "info": 0 },
      "findings": [...],
      "baseline_delta": { "new": 0, "resolved": 0, "existing": 0 }
    }
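Assembling the envelope can be sketched as follows (hypothetical helper; summary counts are derived from the findings list):

```python
import datetime
from collections import Counter


def build_envelope(findings, mode, target, tools_run, delta=None):
    """Wrap findings in the metadata envelope, tallying severities."""
    counts = Counter(f["severity"].lower() for f in findings)
    return {
        "schema_version": "1.0",
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "mode": mode,
        "target": target,
        "tools_run": tools_run,
        "summary": {s: counts.get(s, 0)
                    for s in ("critical", "high", "medium", "low", "info")},
        "findings": findings,
        "baseline_delta": delta or {"new": 0, "resolved": 0, "existing": 0},
    }
```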
After the report, produce a ranked action list:
Ranking formula (descending priority):
For each item in the plan:

    ## Priority N — <title>
    **Risk:** <1 sentence why this matters>
    **Effort:** Low | Medium | High
    **Action:** <concrete remediation step — rotate cred, patch dep, refactor call>
    **References:** <CWE link, CVE link, Semgrep rule docs>
If fix suggestions were requested, append a **Suggested fix:** block with a
corrected code snippet. Mark clearly as AI-suggested; recommend human review
before merge.
Final steps and operational notes:

- Write `sast-report-<timestamp>.json` and present the file.
- If the baseline was regenerated (`audit` mode), note the new `.sast-baseline.json` location.
- If multiple manifests exist (`requirements*.txt` or `pyproject.toml`), run pip-audit once per manifest and aggregate.
- Use `--include` or `--exclude` flags to scope the scan.
- Trufflehog `git` mode can be slow on deep histories; add `--since-commit HEAD~50` for PR mode.
- Respect `# nosec` (Bandit) and `# nosemgrep` inline suppressions, but count and report them separately.

References:

- `references/tools.md` — Tool-specific flags, rule sets, install options, known quirks. Read before Step 2 if unfamiliar with a tool's current CLI.
- `references/severity.md` — Severity mapping tables for each tool, CWE taxonomy, CVSS → severity mapping. Read before Step 3.