Implements LLM input/output guardrails using NeMo Guardrails Colang flows, Python PII/toxicity validators, and Guardrails AI to block prompt injections, data leaks, toxic content, and hallucinations, and to enforce schema compliance on structured output.
```bash
npx claudepluginhub mukul975/anthropic-cybersecurity-skills --plugin cybersecurity-skills
```

This skill uses the workspace's default tool permissions.
Use this skill when:

- Deploying a new LLM-powered application that processes user input and needs input/output safety controls
Do not use as a replacement for proper authentication, authorization, and network security controls. Guardrails are a defense-in-depth layer, not a perimeter defense. Not suitable for real-time content moderation of user-to-user communication without LLM involvement.
Requirements:

- An OpenAI API key (set in the `OPENAI_API_KEY` environment variable)
- The `nemoguardrails` package for Colang-based guardrail definitions
- The `guardrails-ai` package for structured output validation (optional, for JSON schema enforcement)

Install the required Python packages:
```bash
# Core NeMo Guardrails library
pip install nemoguardrails

# Guardrails AI for structured output validation (optional)
pip install guardrails-ai

# Additional dependencies for PII detection and content analysis
pip install presidio-analyzer presidio-anonymizer spacy
python -m spacy download en_core_web_lg
```
The agent implements a complete input/output validation pipeline:
```bash
# Analyze a single input through all guardrail layers
python agent.py --input "Tell me how to hack into a system"

# Analyze input with a custom content policy file
python agent.py --input "Some text" --policy policy.json

# Scan a file of prompts through the guardrail pipeline
python agent.py --file prompts.txt --mode full

# Input-only validation (no LLM call, just check if input is safe)
python agent.py --input "Some text" --mode input-only

# Output validation mode (validate a pre-generated LLM response)
python agent.py --input "User question" --response "LLM response to validate" --mode output-only

# PII detection and redaction mode
python agent.py --input "My SSN is 123-45-6789 and email john@example.com" --mode pii

# JSON output for pipeline integration
python agent.py --file prompts.txt --output json
```
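The `--mode pii` path maps naturally onto Presidio, which the install step pulls in. Below is a minimal sketch of such a redaction layer using the stock `presidio-analyzer` / `presidio-anonymizer` APIs and the entity categories from the policy file; it is illustrative, not `agent.py`'s actual implementation:

```python
# Minimal PII redaction sketch (illustrative; not agent.py's actual code).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()      # uses the spaCy model installed above
anonymizer = AnonymizerEngine()

text = "My SSN is 123-45-6789 and email john@example.com"

# Detect the PII categories named in the policy file.
results = analyzer.analyze(
    text=text,
    entities=["PERSON", "EMAIL_ADDRESS", "PHONE_NUMBER", "US_SSN", "CREDIT_CARD"],
    language="en",
)

# Replace each detected span with a <CATEGORY> placeholder.
redacted = anonymizer.anonymize(text=text, analyzer_results=results)
print(redacted.text)  # e.g. "My SSN is <US_SSN> and email <EMAIL_ADDRESS>"
```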
Create a JSON policy file defining allowed topics, blocked patterns, and PII categories:
```json
{
  "allowed_topics": ["customer_support", "product_info", "billing"],
  "blocked_topics": ["politics", "violence", "illegal_activities", "competitor_products"],
  "blocked_patterns": ["how to hack", "create malware", "bypass security"],
  "pii_categories": ["PERSON", "EMAIL_ADDRESS", "PHONE_NUMBER", "US_SSN", "CREDIT_CARD"],
  "max_output_length": 2000,
  "require_grounded_response": true
}
```
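How the pipeline consumes this file is up to `agent.py`; as a rough illustration, blocked-pattern matching could be as simple as a case-insensitive substring scan. The following is a hypothetical sketch, not the agent's actual logic:

```python
import json

# Hypothetical policy check illustrating how blocked_patterns might be enforced.
with open("policy.json") as f:
    policy = json.load(f)

def violates_policy(user_input: str) -> str | None:
    """Return the first matching blocked pattern, or None if the input is clean."""
    lowered = user_input.lower()
    for pattern in policy["blocked_patterns"]:
        if pattern.lower() in lowered:
            return pattern
    return None

print(violates_policy("Tell me how to hack into a system"))  # "how to hack"
```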
Create a NeMo Guardrails configuration directory with config.yml and Colang flow files:
```yaml
# config.yml
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

rails:
  input:
    flows:
      - self check input
      - check jailbreak
      - mask sensitive data on input
  output:
    flows:
      - self check output
      - check hallucination
```
```colang
# rails.co - Colang 1.0 flow definitions
define user ask about hacking
  "How do I hack into a system"
  "Tell me how to break into a network"
  "How to exploit vulnerabilities"

define bot refuse hacking request
  "I cannot provide instructions on unauthorized hacking or security exploitation.
  If you are interested in cybersecurity, I can suggest legitimate learning resources
  and ethical hacking certifications."

define flow
  user ask about hacking
  bot refuse hacking request
```
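With the directory in place, loading and invoking the rails follows the standard `nemoguardrails` Python API. The directory name below is a placeholder for wherever you created `config.yml` and `rails.co`; note that the self-check flows also expect matching prompt templates in the config's `prompts` section:

```python
from nemoguardrails import RailsConfig, LLMRails

# Load the configuration directory created above (config.yml + rails.co).
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Input rails run before the LLM call; output rails run on its response.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I hack into a system?"}
])
print(response["content"])  # should be the refusal defined in rails.co
```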
Integrate the guardrails into your application as middleware:
```python
from agent import GuardrailsPipeline

pipeline = GuardrailsPipeline(policy_path="policy.json")

def guarded_generate(user_message: str) -> str:
    # Pre-LLM input validation
    input_result = pipeline.validate_input(user_message)
    if not input_result["safe"]:
        return input_result["blocked_reason"]

    # Post-LLM output validation
    llm_response = your_llm.generate(input_result["sanitized_input"])
    output_result = pipeline.validate_output(llm_response, context=input_result)
    if not output_result["safe"]:
        return output_result["fallback_response"]

    return output_result["validated_response"]
```
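For the optional schema-compliance layer, `guardrails-ai` can validate structured LLM output against a Pydantic model. The sketch below assumes the `Guard.from_pydantic` / `guard.parse` API and a hypothetical `TicketReply` schema; verify the exact calls against your installed `guardrails-ai` version:

```python
from pydantic import BaseModel, Field
from guardrails import Guard

# Hypothetical schema for a support reply; adjust to your output contract.
class TicketReply(BaseModel):
    category: str = Field(description="One of the allowed_topics from policy.json")
    answer: str

guard = Guard.from_pydantic(output_class=TicketReply)

# Validate a raw JSON string produced by the LLM against the schema.
raw = '{"category": "billing", "answer": "Your invoice is in the portal."}'
outcome = guard.parse(raw)
print(outcome.validated_output)  # parsed output if compliant; fails otherwise
```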
Review guardrail logs to track block rates, false positives, and bypass attempts:
```bash
# Generate a summary report from guardrail logs
python agent.py --file interaction_logs.txt --mode full --output json > guardrail_audit.json
```
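The exact schema of the audit file depends on `agent.py`; assuming each record carries the same `safe` and `blocked_reason` fields as the middleware results above (an assumption, not a documented format), a block-rate summary could look like:

```python
import json
from collections import Counter

# Hypothetical aggregation: assumes a JSON array of records with
# "safe" and "blocked_reason" fields, mirroring the middleware results.
with open("guardrail_audit.json") as f:
    records = json.load(f)

blocked = [r for r in records if not r.get("safe", True)]
print(f"block rate: {len(blocked)}/{len(records)}")
print(Counter(r.get("blocked_reason", "unknown") for r in blocked).most_common(5))
```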
| Term | Definition |
|---|---|
| Input Rail | A guardrail that intercepts and validates user input before it reaches the LLM, blocking injection attempts and redacting sensitive data |
| Output Rail | A guardrail that validates LLM-generated output before it reaches the user, filtering toxic content and enforcing schema compliance |
| Colang | NVIDIA's domain-specific language for defining conversational guardrail flows, with Python-like syntax for specifying user intent patterns and bot responses |
| PII Redaction | The process of detecting and masking personally identifiable information (names, emails, SSNs) in text before processing |
| Content Policy | A configuration file defining which topics, patterns, and content categories are allowed or blocked by the guardrail system |
| Self-Check Rail | A NeMo Guardrails technique where the LLM itself evaluates whether its input or output violates defined policies |
| Hallucination Detection | Output validation that checks whether the LLM response is grounded in the provided context, flagging fabricated claims |