npx claudepluginhub thejefflarson/soundcheck --plugin soundcheckThis skill uses the workspace's default tool permissions.
Protects against attacker-controlled text that hijacks LLM instructions. Direct
Provides defense techniques against prompt injection attacks including direct, indirect injections, and jailbreaks, grounded in reference patterns, sharp edges, and validations. Use when LLM security terms mentioned.
Flags insecure LLM output handling in code to prevent XSS, command injection, and SQL injection. Use when rendering to UI, executing generated code/shell, or passing to DB/APIs.
Applies LangChain security best practices: secrets management, prompt injection defense, safe tool execution, and LLM output validation for production apps.
Share bugs, ideas, or general feedback.
Protects against attacker-controlled text that hijacks LLM instructions. Direct injection arrives through user input; indirect injection arrives through retrieved documents, emails, or tool outputs. Both can cause the model to exfiltrate data, bypass guardrails, or execute unintended actions.
f"You are a helpful assistant. Answer: {user_input}" — user text lands in the instruction tierFlag the vulnerable code and explain the risk. Then suggest a fix that establishes these properties:
system role; user input and retrieved documents go in the user role wrapped
in explicit delimiter tags (<context>…</context>, <question>…</question>).
Never interpolate user text into the system prompt.Anchor — any language works the same way:
system: developer_instructions # no user text here
user: <context>{docs}</context> # delimited, from the data tier
<question>{sanitized_input}</question>
raw = call_llm(messages)
safe = validate_llm_output(raw) # gate before ANY downstream use
return safe
Confirm these properties hold:
system role message