From soundcheck
Flags insecure LLM output handling in code to prevent XSS, command injection, and SQL injection. Use when rendering to UI, executing generated code/shell, or passing to DB/APIs.
Install:

```
npx claudepluginhub thejefflarson/soundcheck --plugin soundcheck
```

This skill uses the workspace's default tool permissions.
Related skills:

- Detects prompt injection vulnerabilities in LLM code constructing prompts from user input, system prompts, RAG pipelines, or external data. Suggests fixes with trust tiers, delimiters, input screening, and output validation.
- Audits AI-generated code and LLM applications for security vulnerabilities, covering OWASP Top 10 for LLMs, secure coding patterns, and AI-specific threat models.
- Applies LangChain security best practices: secrets management, prompt injection defense, safe tool execution, and LLM output validation for production apps.
Protects against XSS, command injection, and second-order injection that arise when LLM output is treated as trusted. The model may produce malicious content through prompt injection or hallucination; downstream systems must sanitize it the same way they would sanitize raw user input.
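For instance (a minimal sketch, not taken from the skill itself; the strings and variable names are illustrative): server-side code that interpolates model text into HTML should escape it exactly as it would escape form input.

```python
import html

# Imagine the model returned this via prompt injection or hallucination:
llm_response = "<img src=x onerror=alert(1)>"

# Vulnerable: model output interpolated into markup verbatim
unsafe_page = f"<div class='answer'>{llm_response}</div>"

# Safer: escape first, exactly as you would for raw user input
safe_page = f"<div class='answer'>{html.escape(llm_response)}</div>"
```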
Patterns to flag:

- `element.innerHTML = llmResponse` — injects attacker-controlled HTML/JS into the DOM
- `exec(llm_generated_code)` or `subprocess.run(llm_command, shell=True)` — arbitrary code execution
- `db.execute(f"SELECT * FROM {llm_output}")` — LLM output lands in a SQL statement unsanitized
- `dangerouslySetInnerHTML` prop without sanitization

Flag the vulnerable code and explain the risk. Then suggest a fix that establishes these properties:
- HTML: `textContent` or an auto-escaping template for plain text; a sanitizer (DOMPurify, bleach) for rich content. Never `innerHTML`, `dangerouslySetInnerHTML`, or `v-html` with a raw LLM string.
- Shell/code: never `shell=True`, never `eval`, never `os.system(raw_output)`.
- For prompts built from user input or external data, see the prompt-injection skill.

Anchor — shape, not implementation:
```
# HTML
element.textContent = llm_out          # or sanitize(llm_out) for rich

# shell
require(parse(llm_out)[0] in ALLOWED_COMMANDS)
run(parse(llm_out), shell=False, timeout=10)

# SQL
db.execute("SELECT ... WHERE name = ?", [llm_out])   # parameterized
```
Confirm the response:
- No `innerHTML` or `dangerouslySetInnerHTML` without `DOMPurify.sanitize`
- `shell=True` is never passed with LLM-derived input
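For a quick mechanical spot check of those properties, a line-by-line regex pass over a diff can help. The sketch below is purely illustrative (and crude: it only looks within single lines); it is not how the skill itself detects issues.

```python
import re

RISKY_PATTERNS = [
    (r"\binnerHTML\s*=", "raw innerHTML assignment; require sanitization"),
    (r"dangerouslySetInnerHTML(?!.*DOMPurify\.sanitize)",
     "dangerouslySetInnerHTML without DOMPurify.sanitize"),
    (r"shell\s*=\s*True", "shell=True; forbid with LLM-derived input"),
]

def flag_risky_lines(source: str):
    """Yield (line_number, message) for lines matching a risky pattern."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS:
            if re.search(pattern, line):
                yield lineno, message
```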