From soundcheck
Reviews code for sensitive data leaks to LLMs, including PII/credentials in prompts, raw outputs, and shared memory. Flags OWASP LLM06 patterns and suggests redaction/pseudonymization fixes.
npx claudepluginhub thejefflarson/soundcheck --plugin soundcheck
This skill uses the workspace's default tool permissions.
Related skills:
- Audits AI-generated code and LLM applications for security vulnerabilities, covering the OWASP Top 10 for LLMs, secure coding patterns, and AI-specific threat models.
- Assesses privacy risks in LLM outputs, including training data memorization, PII leakage, prompt injection, and hallucinated PII. Guides output filtering, guardrails, and monitoring.
- Provides security patterns for authentication, defense-in-depth, input validation, the OWASP Top 10, LLM safety, and PII masking. Useful for auth flows, sanitization, vulnerability prevention, prompt injection defense, and data redaction.
Prevents confidential data from leaking through LLM inputs or outputs. LLMs may memorize, echo, or expose at inference time the PII, credentials, and business secrets embedded in prompts, leaking them to current users, to future users, or via model extraction.
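For concreteness, here is a minimal sketch of the kind of code the skill flags; the fake record, the key handling, and all names are illustrative, not taken from the skill itself:

import json
import os

# Illustrative stand-ins: a fake user record and a key read from the environment.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "sk-example-key")
user = {"id": 7, "ssn": "123-45-6789", "dob": "1990-01-01", "email": "ada@example.com"}

# Anti-pattern: a live credential and the full user record both land in the
# prompt text, where the model (and anyone reading its output) can see them.
system_prompt = (
    f"You are a support bot. API key: {OPENAI_API_KEY}\n"
    f"User record: {json.dumps(user)}"
)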
system_prompt = f"User record: {json.dumps(user)}" — full user object (SSN, DOB, email) in promptOPENAI_API_KEY or DB passwords hardcoded or interpolated into system promptsFlag the vulnerable code and explain the risk. Then suggest a fix that establishes these properties:
Anchor (shape, not implementation):
def answer(user, user_question):
    safe_q = redact(user_question)                        # scrub PII before it enters the prompt
    prompt = f"Answer for user_id={user.id}: {safe_q}"    # reference, not record
    raw = call_llm(system=DEV_INSTRUCTIONS, user=prompt)
    return redact(raw)                                    # scrub again at every return site
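The anchor assumes a redact() helper, which the skill leaves to the implementer. A minimal sketch, under the assumption that regex masking of common PII and credential shapes is acceptable; production code would use a proper PII detector:

import re

# Hypothetical minimal redactor: masks common PII/credential shapes before
# text enters a prompt or leaves a return site. Patterns are illustrative,
# not exhaustive.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),    # email address
    (re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"), "[API_KEY]"),       # API-key-shaped token
]

def redact(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

Pseudonymization follows the same shape but substitutes a stable token per value (the same email always maps to the same placeholder), so the model can still correlate references without ever seeing the raw value.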
Confirm these properties hold for every relevant pattern in the code under review; each criterion applies only when its pattern is actually present: