You are the top-level orchestrator for secrets scanning. Your ONLY job is to call the Task tool to spawn subagents to do the actual work. Each step below gives you the exact Task tool parameters to use. Do not do the work yourself.
Scans the codebase for hardcoded secrets, API keys, credentials, tokens, and sensitive data. Supports scanning specific directories, `--all` for the full repo, and `--staged` for staged git changes. Reports severity, locations, and remediation guidance.
Defaults:
- Output directory: `~/.ghost/repos/<repo_id>/scans/<short_sha>/secrets`
- Short SHA: `git rev-parse --short HEAD` (falls back to `YYYYMMDD` for non-git dirs)
- Arguments: `$ARGUMENTS`
Any values provided above override the defaults.
## Setup

Run this Bash command to compute the repo-specific output directory, create it, and locate the skill files:

```bash
repo_name=$(basename "$(pwd)") &&
remote_url=$(git remote get-url origin 2>/dev/null || pwd) &&
short_hash=$(printf '%s' "$remote_url" | git hash-object --stdin | cut -c1-8) &&
repo_id="${repo_name}-${short_hash}" &&
short_sha=$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d) &&
ghost_repo_dir="$HOME/.ghost/repos/${repo_id}" &&
scan_dir="${ghost_repo_dir}/scans/${short_sha}/secrets" &&
cache_dir="${ghost_repo_dir}/cache" &&
mkdir -p "$scan_dir/findings" &&
skill_dir=$(find . -path '*skills/scan-secrets/SKILL.md' 2>/dev/null | head -1 | xargs dirname) &&
echo "scan_dir=$scan_dir cache_dir=$cache_dir skill_dir=$skill_dir"
```
Store scan_dir (the absolute path under ~/.ghost/repos/), cache_dir (the repo-level cache directory), and skill_dir (the absolute path to the skill directory containing agents/, scripts/, etc.).
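For illustration, the `repo_id` derivation above can be reproduced without a git repo. `git hash-object` computes SHA-1 over the blob `blob <len>\0<content>`, so `sha1sum` yields the same 8-character hash; the URL and repo name below are made-up examples, not values from any real scan:

```shell
# Hypothetical standalone walkthrough of the repo_id scheme.
remote_url="git@github.com:acme/widgets.git"   # example remote URL
repo_name="widgets"                            # example repo name
len=$(printf '%s' "$remote_url" | wc -c)
# "blob <len>\0<content>" is the git blob header git hash-object uses
short_hash=$(printf 'blob %d\0%s' "$len" "$remote_url" | sha1sum | cut -c1-8)
repo_id="${repo_name}-${short_hash}"
echo "$repo_id"
```

Because the hash keys on the remote URL (or the working directory when no remote exists), two checkouts of the same repo map to the same scan directory.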
After this step, your only remaining tool is Task. Do not use Bash, Read, Grep, Glob, or any other tool for Steps 1–4.
## Step 1: Initialize

Call the Task tool to initialize the poltergeist binary:

```json
{
  "description": "Initialize poltergeist binary",
  "subagent_type": "general-purpose",
  "prompt": "You are the init agent. Read and follow the instructions in <skill_dir>/agents/init/agent.md.\n\n## Inputs\n- skill_dir: <skill_dir>"
}
```
The init agent installs poltergeist to ~/.ghost/bin/poltergeist (or poltergeist.exe on Windows).
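As a rough sketch, the platform-specific binary name could be resolved like this (the `uname`-based detection and variable names are illustrative assumptions, not the init agent's actual logic):

```shell
# Hypothetical sketch: pick the binary name by platform.
bin_name="poltergeist"
case "$(uname -s)" in
  MINGW*|MSYS*|CYGWIN*) bin_name="poltergeist.exe" ;;  # Windows shells
esac
install_path="$HOME/.ghost/bin/$bin_name"
echo "$install_path"
```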
## Step 2: Scan

Call the Task tool to run the poltergeist scanner:

```json
{
  "description": "Scan for secret candidates",
  "subagent_type": "general-purpose",
  "prompt": "You are the scan agent. Read and follow the instructions in <skill_dir>/agents/scan/agent.md.\n\n## Inputs\n- repo_path: <repo_path>\n- scan_dir: <scan_dir>"
}
```
The scan agent returns the candidate count and writes <scan_dir>/candidates.json.
If the candidate count is 0, skip to Step 4 (Summarize) with no findings.
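The zero-candidate gate can be sketched as follows, assuming `candidates.json` holds a top-level JSON array (the file here is a `mktemp` stand-in, not a real scan output):

```shell
# Hypothetical gate before Step 3, using a stand-in candidates file.
demo=$(mktemp -d)
printf '[]' > "$demo/candidates.json"   # empty candidate list
count=$(python3 -c 'import json,sys; print(len(json.load(open(sys.argv[1]))))' "$demo/candidates.json")
if [ "$count" -eq 0 ]; then
  echo "no candidates; skip to Step 4"
fi
```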
## Step 3: Analyze

Call the Task tool to analyze the candidates:

```json
{
  "description": "Analyze secret candidates",
  "subagent_type": "general-purpose",
  "prompt": "You are the analysis agent. Read and follow the instructions in <skill_dir>/agents/analyze/agent.md.\n\n## Inputs\n- repo_path: <repo_path>\n- scan_dir: <scan_dir>\n- skill_dir: <skill_dir>\n- cache_dir: <cache_dir>"
}
```
The analysis agent spawns parallel analyzers for each candidate and writes finding files to <scan_dir>/findings/.
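A quick sanity check on the analysis output is to count the finding files, assuming one JSON file per finding (the directory and filenames below are `mktemp` stand-ins for `<scan_dir>/findings`):

```shell
# Hypothetical check: how many finding files did the analyzers write?
findings=$(mktemp -d)
touch "$findings/finding-001.json" "$findings/finding-002.json"   # stand-in findings
n=$(find "$findings" -name '*.json' | wc -l)
echo "findings written: $n"
```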
## Step 4: Summarize

Call the Task tool to summarize the findings:

```json
{
  "description": "Summarize scan results",
  "subagent_type": "general-purpose",
  "prompt": "You are the summarize agent. Read and follow the instructions in <skill_dir>/agents/summarize/agent.md.\n\n## Inputs\n- repo_path: <repo_path>\n- scan_dir: <scan_dir>\n- skill_dir: <skill_dir>\n- cache_dir: <cache_dir>"
}
```
After executing all the tasks, report the scan results to the user.
If any Task call fails, retry it once. If it fails again, stop and report the failure.
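The retry-once policy can be sketched in shell (helper names here are illustrative; the orchestrator itself retries via the Task tool, not a shell function):

```shell
# Hypothetical sketch of "retry once, then stop and report".
run_with_retry() {
  "$@" && return 0
  echo "retrying: $*" >&2
  "$@" && return 0
  echo "failed twice: $*" >&2
  return 1
}

flag=$(mktemp -u)
flaky() { [ -e "$flag" ] || { touch "$flag"; return 1; }; }  # fails on first call only

run_with_retry flaky && echo "succeeded on retry"
run_with_retry false 2>/dev/null || echo "stopped after second failure"
```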