From agent-infra-security
Generates interactive triage checklists, incident runbooks, or shell scripts to investigate and remediate PyPI supply chain attacks on compromised Python packages.
Install it with:

npx claudepluginhub makash/agent-infra-security --plugin agent-infra-security

This skill uses the workspace's default tool permissions.
Help developers triage, investigate, contain, and recover from a compromised Python package on PyPI.
This skill produces one of three outputs depending on what the user asks for:

1. An interactive triage checklist, walked through one phase at a time in conversation.
2. A full incident response runbook as a self-contained markdown document.
3. A check_compromise.sh script that automates detection, reports findings, and prompts before any remediation action.

If the user doesn't specify which format, default to the interactive triage checklist. If the user says something like "just give me everything" or "runbook", produce the full markdown document. If they say "script" or "automate", generate the shell script.
Before producing any output, collect the following from the user. If they've already provided some of this in the conversation, don't re-ask.

Required:
- The package name (e.g. litellm)
- The compromised version(s) (e.g. 1.82.7, 1.82.8)

Helpful but not required (use defaults or skip if the user doesn't know):
- The attack window (when the compromised versions were live)
- A known-safe version to pin to (e.g. 1.82.6)
- Attack-specific IOCs such as C2 domains (if none are provided, fall back to the built-in pattern library in references/ioc-patterns.md)

Every output format follows these six phases in order. The depth and format change based on the output type, but the sequence is always the same.
Phase 1: Detect exposure

The goal is to determine whether the compromised package exists anywhere in the user's environments, including as a transitive dependency they never directly installed.
Commands to guide the user through:
Check if the package is installed and what version:
pip show <PACKAGE> | grep -E "^(Name|Version|Location)"
uv pip show <PACKAGE>
poetry show <PACKAGE>
conda list <PACKAGE>
Find what pulled it in as a transitive dependency (this is the critical step most developers miss):
pip install pipdeptree
pipdeptree -r -p <PACKAGE>
The reverse tree shows the chain. If dspy depends on litellm>=1.64.0, the user will see it here even though they only ran pip install dspy.
Hunt across ALL environments on the machine — developers often have multiple venvs, conda envs, global installs, and package manager caches:
find / -path "*/site-packages/<PACKAGE>" -type d 2>/dev/null
find / -name "*<PACKAGE>*.pth" 2>/dev/null
find ~/.cache/uv -name "*<PACKAGE>*" 2>/dev/null
pip cache list <PACKAGE>
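The find commands above locate files; a sweep like the following also reports which version each virtualenv actually has installed. This is a sketch, not part of the original playbook: it assumes the standard pyvenv.cfg layout and uses litellm as a stand-in for the package name.

```shell
#!/bin/sh
# Sketch: report the package version in every venv under $HOME.
# PACKAGE is a placeholder; litellm is the running example in this playbook.
PACKAGE="litellm"
find "$HOME" -maxdepth 6 -name "pyvenv.cfg" 2>/dev/null | while read -r cfg; do
    venv=$(dirname "$cfg")
    # Each venv's own interpreter sees only that venv's site-packages.
    v=$("$venv/bin/python" -m pip show "$PACKAGE" 2>/dev/null | awk '/^Version:/ {print $2}')
    if [ -n "$v" ]; then
        echo "FOUND $venv: $PACKAGE $v"
    fi
done
echo "venv sweep complete"
```

Venvs with a broken or missing interpreter are skipped silently rather than aborting the sweep.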
Check Docker images built during the attack window:
docker images --format "{{.Repository}}:{{.Tag}} {{.CreatedAt}}" | sort -k2
docker run --rm <IMAGE> pip show <PACKAGE>
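Checking images one at a time misses the ones you forgot you built. A hedged sketch that loops over every local image (assumes the image has pip on PATH; images without it are silently skipped, and litellm again stands in for the package name):

```shell
#!/bin/sh
# Sketch: check every local image for the package, not just remembered ones.
PACKAGE="litellm"
if command -v docker >/dev/null 2>&1; then
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>' |
    while read -r image; do
        version=$(docker run --rm --entrypoint pip "$image" show "$PACKAGE" 2>/dev/null |
                  awk '/^Version:/ {print $2}')
        if [ -n "$version" ]; then
            echo "FOUND $image: $PACKAGE $version"
        fi
    done
else
    echo "docker not available"
fi
```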
Search requirements and lockfiles for unpinned or loosely pinned references:
grep -rn "<PACKAGE>" --include="*.txt" --include="*.toml" --include="*.lock" --include="*.cfg" .
Flag dangerous patterns: >=, ~=, >, or no version pin at all.
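Those patterns can be flagged mechanically. A sketch (litellm as the placeholder package; the regex catches loose operators and bare end-of-line references, though it won't catch every packaging syntax):

```shell
#!/bin/sh
# Sketch: flag loose pins (>=, ~=, >) or bare, unpinned references.
PACKAGE="litellm"
grep -rnE "${PACKAGE}([[:space:]]*(>=|~=|>)|[[:space:]]*$)" \
    --include="*.txt" --include="*.toml" --include="*.cfg" . 2>/dev/null \
    || echo "no loose or unpinned references to $PACKAGE found"
```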
Check CI/CD logs for the package version string during the attack window:
grep -r "<PACKAGE>==" /path/to/ci/logs/
Phase 2: Confirm the version

If Phase 1 found the package, confirm whether the installed version matches a compromised version.
pip show <PACKAGE> | grep Version
Check file timestamps to estimate when the package was installed. Don't import the package to locate it, since importing a compromised package executes its code; resolve the path without importing:
stat "$(python -c "import importlib.util; print(importlib.util.find_spec('<PACKAGE>').origin)")"
For .pth-based attacks (increasingly common), scan site-packages for .pth files containing suspicious patterns:
SITE=$(python -c "import site; print(site.getsitepackages()[0])")
find "$SITE" -name "*.pth" -exec grep -l "base64\|subprocess\|exec\|eval\|compile" {} \;
Check pip install logs if available:
cat ~/.pip/pip.log 2>/dev/null | grep <PACKAGE>
grep -r "<PACKAGE>" ~/.cache/pip/http/ 2>/dev/null | head -20
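The version comparison can be scripted so it's repeatable across environments. A minimal sketch; PACKAGE and BAD_VERSIONS are placeholders filled in with the litellm running example, so substitute the actual advisory list:

```shell
#!/bin/sh
# Sketch: compare the installed version against the advisory's bad list.
PACKAGE="litellm"
BAD_VERSIONS="1.82.7 1.82.8"

is_compromised() {
    for bad in $BAD_VERSIONS; do
        if [ "$1" = "$bad" ]; then return 0; fi
    done
    return 1
}

installed=$(pip show "$PACKAGE" 2>/dev/null | awk '/^Version:/ {print $2}')
if [ -z "$installed" ]; then
    echo "VERDICT: $PACKAGE not installed in this environment"
elif is_compromised "$installed"; then
    echo "VERDICT: COMPROMISED version $installed installed"
else
    echo "VERDICT: $PACKAGE $installed installed, not in the known-bad list"
fi
```

Exact string matching is deliberate here: version-range logic invites false negatives when advisories list specific releases.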
After confirming the version, classify the finding into one of these five categories to communicate risk clearly:
Use this classification in the output for each environment reviewed so the user and their team can quickly understand the severity.
Phase 3: Hunt for IOCs

If the user confirmed they had a compromised version, look for evidence that the payload ran. Read references/ioc-patterns.md for the built-in pattern library. Combine those patterns with any attack-specific IOCs the user provides.
Filesystem persistence:
find ~/.config -name "*.py" -mtime -3 2>/dev/null
find ~/.config/systemd/user/ -name "*.service" -mtime -3 2>/dev/null
crontab -l 2>/dev/null
ls -la /etc/cron.d/ 2>/dev/null
Network indicators:
ss -tnp | grep python
netstat -tnp 2>/dev/null | grep python
If the user has specific C2 domains from the advisory, search logs:
grep -rF "<C2_DOMAIN>" /var/log/ 2>/dev/null
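/var/log rotates quickly, so it's also worth checking shell history and the systemd journal. A sketch; the C2 domain is a placeholder to replace with the value from the advisory:

```shell
#!/bin/sh
# Sketch: search shell history and the journal for the C2 domain as well,
# since /var/log may already have rotated. C2_DOMAIN is a placeholder.
C2_DOMAIN="c2.example.invalid"
grep -F "$C2_DOMAIN" "$HOME/.bash_history" "$HOME/.zsh_history" 2>/dev/null \
    || echo "no hits in shell history"
if command -v journalctl >/dev/null 2>&1; then
    journalctl --since "7 days ago" 2>/dev/null | grep -F "$C2_DOMAIN" \
        || echo "no hits in journal"
fi
```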
Process inspection:
ps aux | grep python | grep -v grep
Kubernetes (if applicable):
kubectl get pods -n kube-system --sort-by=.metadata.creationTimestamp
kubectl get secrets --all-namespaces -o json | jq '.items[] | select(.metadata.creationTimestamp > "<ATTACK_DATE>")'
Credential access evidence:
find ~/.ssh ~/.aws ~/.config/gcloud ~/.kube -atime -1 2>/dev/null
stat ~/.ssh/id_rsa 2>/dev/null | grep Access
Phase 4: Contain and remediate

Before removing anything, preserve evidence if your organization may need forensic analysis. Copy or snapshot affected environments, save pip inspect output, screenshot active network connections, and export relevant logs. Once you uninstall packages and purge caches, that evidence is gone.
# Preserve evidence before cleanup
python -m pip inspect > pip-inspect-evidence-$(date +%Y%m%d-%H%M%S).json
python -m pip freeze > pip-freeze-evidence-$(date +%Y%m%d-%H%M%S).txt
ss -tnp > network-connections-$(date +%Y%m%d-%H%M%S).txt 2>/dev/null
cp -r $(python -c "import site; print(site.getsitepackages()[0])") site-packages-backup-$(date +%Y%m%d-%H%M%S)/ 2>/dev/null
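The preservation commands above can be bundled into one timestamped archive so nothing gets lost during cleanup. A sketch; adjust the collected files to what exists on the affected machine:

```shell
#!/bin/sh
# Sketch: collect the evidence files above into a single timestamped tarball
# before any cleanup destroys them. Missing tools are skipped, not fatal.
ts=$(date +%Y%m%d-%H%M%S)
evdir="incident-evidence-$ts"
mkdir -p "$evdir"
python -m pip freeze  > "$evdir/pip-freeze.txt"   2>/dev/null || true
python -m pip inspect > "$evdir/pip-inspect.json" 2>/dev/null || true
ss -tnp               > "$evdir/network.txt"      2>/dev/null || true
crontab -l            > "$evdir/crontab.txt"      2>/dev/null || true
tar czf "$evdir.tar.gz" "$evdir"
echo "evidence archived: $evdir.tar.gz"
```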
Remove the compromised package and purge caches so it can't be reinstalled from a cached wheel.
pip uninstall <PACKAGE> -y
pip cache purge
rm -rf ~/.cache/uv
rm -rf ~/.cache/pip
If persistence artifacts were found in Phase 3, remove them. In the interactive checklist, describe each artifact and ask the user to confirm before deleting. In the shell script, prompt with read -p.
Pin to a known-safe version:
echo "<PACKAGE>==<SAFE_VERSION>" >> requirements.txt
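Appending a pin to requirements.txt only covers direct installs; a constraints file also forces the version when the package arrives transitively. A sketch using the litellm running example and its hypothetical safe version 1.82.6:

```shell
#!/bin/sh
# Sketch: a constraints file pins the version even for transitive installs.
# Package and version are the running example; substitute the advisory's.
echo "litellm==1.82.6" > constraints.txt
# Then apply the constraint to the whole dependency tree:
#   pip install -c constraints.txt -r requirements.txt
cat constraints.txt
```

Unlike a requirements entry, a constraint never installs the package on its own; it only caps the version if something else pulls it in.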
For Docker: rebuild images from a clean base, pinning the safe version. Don't just docker exec into running containers.
Phase 5: Rotate credentials

Hand off to the credential-exfiltration-response skill for systematic rotation.
Before handing off, scope what was accessible on the compromised system so the credential skill knows what to rotate:
# Find all .env files and list the keys (not values) that need rotation
find . -name ".env*" -exec grep -h "KEY\|SECRET\|TOKEN\|PASSWORD\|CREDENTIAL" {} \; | cut -d= -f1 | sort -u
# List credential files present
ls ~/.ssh/id_* ~/.aws/credentials ~/.config/gcloud/application_default_credentials.json ~/.kube/config ~/.npmrc ~/.pypirc ~/.docker/config.json 2>/dev/null
Tell the credential-exfiltration-response skill:
The credential skill will walk through detection of abuse, rotation for each credential class, and verification that old credentials are truly invalidated (including provider-specific delays like AWS STS sessions surviving key deletion for up to 36 hours).
Phase 6: Harden

These are the structural improvements to prevent the next supply chain attack from having the same impact.
Pin exact versions in requirements:
pip freeze | grep <PACKAGE>
# Use == pinning, not >= or ~=
Generate an SBOM so you can answer "am I affected?" in seconds next time:
pip install cyclonedx-bom
cyclonedx-py requirements -i requirements.txt -o sbom.json
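With the SBOM in place, affectedness is one jq query away. A sketch, assuming a CycloneDX JSON document with a top-level components array (which cyclonedx-py emits) and litellm as the example package:

```shell
#!/bin/sh
# Sketch: query a CycloneDX SBOM for a package. Prints name and version of
# any matching component; prints nothing if the package isn't present.
PACKAGE="litellm"
if [ -f sbom.json ]; then
    jq -r --arg pkg "$PACKAGE" \
        '.components[]? | select(.name == $pkg) | "\(.name) \(.version)"' sbom.json
else
    echo "sbom.json not found"
fi
```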
Run pip-audit in CI:
pip install pip-audit
pip-audit
For uv users, --exclude-newer refuses any distribution published after the given timestamp, freezing your supply chain at a known-good point in time:
uv pip install --exclude-newer "2026-03-23T00:00:00Z" <PACKAGE>
Scope secrets in CI/CD:
Instead of workflow-level environment variables, pass secrets only to the specific step that needs them. This limits blast radius if a dependency runs code during pip install.
Use Trusted Publishing for your own packages: OIDC tokens scoped to CI instead of long-lived PyPI API tokens.
Lockfiles with hashes: pip-compile --generate-hashes or uv pip compile --generate-hashes to detect tampered wheels.
Output: interactive triage checklist

Walk the user through one phase at a time. After each phase, ask what they found before proceeding to the next. Adapt the remaining phases based on their answers. For example, if Phase 1 shows they're not exposed, stop and tell them they're clear — don't walk through IOC hunting.
Structure each phase as: a brief statement of the goal, the commands to run, and a question asking what the user found before moving on.
Output: full incident response runbook

Produce a markdown document with all six phases, all commands pre-filled with the specific package name and versions, IOC domains, and persistence paths from the advisory. Include a summary header with the incident metadata (package, versions, attack window, IOC domains). This is meant to be shared with a team, so write it to be self-contained — someone reading it for the first time should understand what happened and what to do.
Save this as a .md file using the create_file tool.
Output: detection shell script

Generate a bash script called check_compromise.sh that:

- Runs the detection and IOC checks and reports what it finds
- Prompts with read -p before any destructive action (uninstall, cache purge, file deletion)
- Accepts a --dry-run flag that skips all prompts and just reports

Read scripts/check_compromise_template.sh for the template. Customize it with the specific package details from the user's context.
Save this using the create_file tool and make it executable.
When producing the full incident response runbook or interactive checklist, include this template at the end so the user can document their findings.
- The pipdeptree -r step is often the most important single command in the entire playbook.
- Hand off credential rotation to the credential-exfiltration-response skill — it handles the full detect/rotate/verify lifecycle.