By cmaenner
Audit AI agents, LLMs, MCP servers, APIs, web apps, and codebases for security risks using OWASP Top 10 frameworks: detect prompt injections, excessive permissions, secret leaks, and dependency CVEs, and build threat models. Review configurations, dependencies, and agentic workflows to ensure secure AI deployments.
npx claudepluginhub cmaenner/agent-security-playbook

Comprehensive API security review against OWASP API Security Top 10 (2023). Use when reviewing OpenAPI/Swagger specs, auditing REST/GraphQL/gRPC implementations, testing authentication mechanisms, or checking API gateway configurations. Covers BOLA/IDOR, broken auth, mass assignment, rate limiting, SSRF, and more with real-world attack scenarios.
Security-focused code review mapped to OWASP Top 10 and ASVS. Use when reviewing pull requests, auditing files or modules for vulnerabilities, or performing pre-merge security gate checks. Covers injection, auth, authorization, cryptography, data exposure, misconfiguration, and deserialization.
Comprehensive LLM security assessment against OWASP Top 10 for LLM Applications 2025. Use when reviewing LLM-integrated applications, RAG pipelines, chatbots, AI agents, or GenAI features. Covers prompt injection, data poisoning, supply chain, excessive agency, and more with real-world attack scenarios and testing methodologies.
Security review of MCP (Model Context Protocol) server implementations and configurations. Use when auditing MCP server source code, evaluating third-party MCP servers before installation, or reviewing Claude Code MCP integrations for overpermissioning, injection risks, and data exposure.
Comprehensive threat modeling for multi-agent systems using CSA MAESTRO 7-layer framework and OWASP Multi-Agentic System Threat Modeling Guide v1.0. Systematically analyzes threats across all architectural layers from foundation models to agent ecosystems.
Test LLM-integrated applications against known prompt injection techniques, evasion methods, and attack intents using the Arcanum PI Taxonomy. Use when red-teaming AI apps, validating guardrails, or deepening LLM01 (Prompt Injection) assessments.
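The kind of check this skill automates can be illustrated with a naive keyword heuristic. The pattern list and function name below are purely illustrative assumptions, not the Arcanum PI Taxonomy itself, which covers many more techniques (encodings, role-play pivots, multi-turn evasion):

```python
import re

# Illustrative patterns only -- a real taxonomy-driven test suite covers
# far more techniques than plain-text phrase matching can catch.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system|hidden) prompt",
]

def flag_prompt_injection(text: str) -> list[str]:
    """Return the illustrative patterns that match the given input."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

A guardrail validated only against checks like this will miss encoded or indirect injections, which is why taxonomy-based red-teaming goes deeper.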
Audit AI agent configurations for security risks — excessive permissions, prompt injection surfaces, data exfiltration paths, and missing guardrails. Use when reviewing CLAUDE.md files, MCP configs, agent orchestration code, or any AI agent setup.
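As a minimal sketch of such an audit, the snippet below flags broad tool grants and missing human-in-the-loop review in a hypothetical JSON agent config. The schema keys (`allowed_tools`, `auto_approve`) and the risky-tool list are assumptions for illustration, not a real Claude Code or MCP schema:

```python
import json

# Hypothetical risky markers; a real audit applies tool-specific policy rules.
RISKY_TOOLS = {"shell", "exec", "file_write"}

def audit_agent_config(config_json: str) -> list[str]:
    """Flag overly broad grants in a hypothetical agent config schema."""
    config = json.loads(config_json)
    findings = []
    for tool in config.get("allowed_tools", []):
        if tool == "*" or tool in RISKY_TOOLS:
            findings.append(f"broad or destructive tool enabled: {tool}")
    if config.get("auto_approve", False):
        findings.append("auto_approve bypasses human-in-the-loop review")
    return findings
```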
Assess agentic AI applications against the OWASP Top 10 for Agentic Applications 2026. Use when reviewing autonomous AI agents, multi-agent systems, or agentic workflows for security risks including goal hijacking, tool misuse, privilege abuse, and rogue agent behavior.
Scan project dependencies for known vulnerabilities (CVEs). Use when reviewing dependency files (package.json, requirements.txt, go.mod, pom.xml, Gemfile, Cargo.toml, etc.), triaging Dependabot/Renovate alerts, or performing pre-deployment security checks.
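At its core this is manifest parsing plus an advisory lookup. The sketch below hardcodes a single toy advisory entry for illustration (CVE-2021-23337 affects lodash before 4.17.21); a real scan queries a database such as OSV or the NVD:

```python
import json

# Toy advisory list for illustration; real scanners query OSV/NVD.
KNOWN_BAD = {("lodash", "4.17.20"): "CVE-2021-23337"}

def scan_package_json(manifest: str) -> list[str]:
    """Report dependencies pinned to versions in the toy advisory list."""
    deps = json.loads(manifest).get("dependencies", {})
    findings = []
    for name, version in deps.items():
        pinned = version.lstrip("^~")  # drop semver range prefixes
        if (name, pinned) in KNOWN_BAD:
            findings.append(f"{name}@{pinned}: {KNOWN_BAD[(name, pinned)]}")
    return findings
```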
Detect hardcoded credentials, API keys, tokens, and secrets in source code and configuration files. Use when reviewing code for leaked secrets before commit/merge, auditing a repository for credential exposure, or setting up secret detection.
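A bare-bones version of this check is pattern matching over source lines. The two rules below are illustrative; production scanners such as gitleaks or trufflehog combine hundreds of rules with entropy analysis to cut false positives:

```python
import re

# Illustrative signatures only.
SECRET_RULES = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) pairs for lines matching a rule."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_RULES.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```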
Analyze code for securable qualities using the OWASP FIASSE/SSEM framework. Use when assessing code securability, evaluating engineering attributes that impact security (analyzability, modifiability, testability, confidentiality, accountability, authenticity, availability, integrity, resilience), reviewing merge requests through a securable engineering lens, or establishing a security posture baseline. Complements vulnerability-centric reviews by focusing on whether code is able to accommodate fixes for security findings and is engineered to remain securable over time.
Meta-skill that wraps code generation to enforce OWASP FIASSE securable coding attributes and principles. Use when generating, scaffolding, or refactoring code so that the output is engineered to be inherently securable by default. Applies the nine SSEM attributes (Analyzability, Modifiability, Testability, Confidentiality, Accountability, Authenticity, Availability, Integrity, Resilience), the Transparency principle, and OWASP FIASSE defensive coding practices to every code generation task. Invoke this skill alongside or instead of raw code generation when the user asks for secure code, securable code, FIASSE-compliant code, or when generating security-sensitive components (auth, input handling, data access, API endpoints, trust boundaries).
Security-first development guidance based on OWASP ASVS (Application Security Verification Standard). Use this skill automatically when planning or implementing any code that touches user input, authentication, data persistence, network communication, file I/O, cryptography, or access control. This skill ensures all generated code adheres to industry-standard security practices with explicit references to applied guidance.
Review web applications against the OWASP Top 10 for Web Applications (2021). Use when auditing web apps, reviewing server-side code, or assessing web frameworks for the classic OWASP Top 10 risks including injection, broken auth, and XSS.
Runtime security for AI agents. Blocks destructive actions before execution, routes high-risk operations through human approval, and maintains an immutable audit trail. Covers OWASP MCP Top 10, ASI Top 10, and Agentic Skills Top 10.
GoPlus AgentGuard — AI agent security guard. Blocks dangerous commands, prevents data leaks, protects secrets. 20 detection rules, runtime action evaluation, trust registry.
Skeptical-reading and prompt-injection defense for AI coding agents. Trust nothing. Ship safely.
Automated OWASP security checks — Web Top 10:2025, LLM Top 10:2025, API Security Top 10:2023
Security testing skills for AI-assisted IDEs and coding agents. 25 vulnerability patterns across code execution, prompt injection, data exfiltration, and trust persistence.
Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations (Claude Code Action, Gemini CLI, OpenAI Codex, GitHub AI Inference)
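One classic finding such an audit looks for is a workflow triggered by `pull_request_target` that also checks out the PR head ref, which runs untrusted contributor code with repository secrets in scope. A deliberately naive string-level sketch (a real audit would parse the workflow YAML):

```python
def audit_workflow(workflow_text: str) -> list[str]:
    """Flag the pull_request_target + untrusted-checkout antipattern."""
    findings = []
    if (
        "pull_request_target" in workflow_text
        and "github.event.pull_request.head" in workflow_text
    ):
        findings.append(
            "pull_request_target workflow checks out untrusted PR head "
            "with secrets available"
        )
    return findings
```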