By mindgard
Test AI-assisted IDEs and coding agents for 25 security vulnerability patterns across code execution, prompt injection, data exfiltration, and trust persistence by mapping attack surfaces, planning multi-stage chains, auditing source code, and exploiting interaction tiers from zero-click to trusted workspace.
npx claudepluginhub mindgard/ai-ide-skills --plugin ai-ide-vuln-skillsPlans and constructs multi-stage attack chains against AI IDEs. Use when combining vulnerability primitives into end-to-end exploits, assessing overall IDE security posture, or mapping how individual vulnerabilities chain together through the file-write pivot point. Each chain is classified by interaction tier to prioritize reportable findings.
Tests AI IDEs for code execution vulnerabilities beyond MCP and terminal filters. Use when assessing hooks abuse, binary planting, IDE settings exploitation, tools definition auto-loading, or environment variable prefixing attack vectors. Patterns are ordered by interaction tier: Tier 1 (zero-interaction) through Tier 4 (trusted workspace + specific action).
Tests AI IDEs for data exfiltration vulnerabilities. Use when assessing markdown image rendering, Mermaid diagram abuse, pre-configured URL fetching, model provider redirect, webview rendering, or other outbound data channels in AI coding assistants.
Maps attack surface of AI-assisted IDEs before vulnerability testing. Use when starting a security assessment of an AI IDE, analyzing IDE documentation for security blind spots, or enumerating config files and auto-load paths. Works for both open-source and closed-source targets. Annotates every discovered feature with an interaction tier so testing prioritizes zero-click and agent-mediated vectors first.
Guides source code auditing of open-source AI IDEs for security vulnerabilities. Use when reviewing AI IDE source code, analyzing command filtering implementations, auditing MCP integration code, or assessing file-write permission models in open-source AI coding tools.
Tests AI IDEs for MCP configuration poisoning vulnerabilities. Use when auditing MCP integration security, testing whether an IDE auto-loads untrusted MCP server definitions from workspace config files, or assessing MCP tool approval and invocation controls. Assessment is structured around four interaction tiers, tested in priority order from zero-click auto-load to TOCTOU on trusted workspaces.
Tests AI IDEs for prompt injection vulnerabilities that enable config modification and privilege escalation. Use when assessing adversarial directory attacks, prompt template auto-loading, rules override, or file-write-to-config-modification attack chains. Patterns are organized by interaction tier from highest to lowest severity.
Tests terminal command filtering and allowlist implementations in AI IDEs for bypass vulnerabilities. Use when assessing command execution controls, testing shell injection vectors, or evaluating allowlist/blocklist implementations in AI coding agents.
Share bugs, ideas, or general feedback.
Runs code reviews using external LLM CLIs (OpenAI Codex, Google Gemini) on uncommitted changes, branch diffs, or specific commits. Bundles Codex's built-in MCP server for direct tool access.
AI-powered whitebox penetration testing plugin for Claude Code. 9 languages, 27 skills, 8 autonomous agents. STRIDE threat modeling, hotspot-aware findings, SARIF output, and polyglot monorepo support.
Security best practices advisor with vulnerability detection and fixes
Auto-scan repositories and packages for security threats on install/clone
Security research toolkit for discovering and remediating vulnerabilities