Install: `npx claudepluginhub agricidaniel/claude-email`

This skill is limited to using the following tools:
Performs comprehensive email deliverability auditing for a domain. Checks DNS authentication records (SPF, DKIM, DMARC), infrastructure (MX, PTR, TLS), reputation (blacklists), and bulk sender compliance. Generates a health score (0-100) with prioritized fixes.
Investigates email inbox placement, sender reputation, and compliance signals like SPF/DKIM/DMARC. Use for placement drops, spam complaints, warmups, or audits.
Audits domain SPF, DKIM, DMARC DNS records using dnspython to verify email authentication configs. Validates syntax, selectors, policies; flags spoofing risks; suggests fixes.
Conducts SMTP server pentests: discovers services with nmap, grabs banners, enumerates users via smtp-user-enum, tests open relays, brute-forces auth with hydra, and suggests hardening.
Performs comprehensive email deliverability auditing for a domain. Checks DNS authentication records (SPF, DKIM, DMARC), infrastructure (MX, PTR, TLS), reputation (blacklists), and bulk sender compliance. Generates a health score (0-100) with prioritized fixes.
Example domains: rankenstein.cloud, example.com

What to check:
- TXT record at `<domain>` containing `v=spf1`
- The "all" qualifier: `-all` (pass), `~all` (softfail), `?all` (neutral), `+all` (fail)

Commands:
dig txt <domain> +short | grep "v=spf1"
# or
python scripts/check_deliverability.py <domain> --spf
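The qualifier check above can be sketched as a small pure helper. `classify_spf_all` is a hypothetical function name for illustration, not part of the bundled scripts:

```python
import re

# Verdict labels follow the audit categories above.
VERDICTS = {"-": "pass", "~": "softfail", "?": "neutral", "+": "fail"}

def classify_spf_all(record: str) -> str:
    """Classify a raw SPF TXT record by its 'all' qualifier."""
    if not record.startswith("v=spf1"):
        return "no-spf"
    m = re.search(r"([-~?+]?)all\b", record)
    if not m:
        return "no-all-mechanism"
    # A bare "all" is equivalent to "+all" per RFC 7208.
    return VERDICTS[m.group(1) or "+"]
```

Feed it the string returned by the `dig txt` query above, e.g. `classify_spf_all("v=spf1 include:_spf.google.com -all")` yields `"pass"`.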
Scoring:
- Strict policy (`-all`): 100 points
- Softfail (`~all`): 70 points

What to check:
Common selectors: `google`, `default`, `selector1`, `selector2`, `k1`, `mandrill`, `dkim`

Commands:
dig txt google._domainkey.<domain> +short
dig txt default._domainkey.<domain> +short
dig txt selector1._domainkey.<domain> +short
# Check common selectors
Note: DKIM selectors are not discoverable without prior knowledge. Check common ones and ask user if their email provider uses a specific selector.
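Once a TXT record comes back for a candidate selector, a quick sanity check can confirm it looks like a DKIM key. This is a hypothetical helper assuming a semicolon-delimited tag list, not a full RFC 6376 validator:

```python
def looks_like_dkim_key(record: str) -> bool:
    """Return True if a TXT record carries a non-empty DKIM public key."""
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    # A DKIM record must carry a public key in the p= tag;
    # an empty p= means the key was revoked.
    return bool(tags.get("p"))
```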
Scoring:
What to check:
- TXT record at `_dmarc.<domain>`
- Policy: `p=reject` (excellent), `p=quarantine` (good), `p=none` (monitoring)
- Aggregate reporting (`rua=`) tag present
- Forensic reporting (`ruf=`) tag present (optional)
- Alignment modes: `aspf=` (SPF) and `adkim=` (DKIM) - relaxed vs strict
- Percentage (`pct=`) should be 100 for full enforcement

Commands:
dig txt _dmarc.<domain> +short
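The scoring rubric for DMARC can be transcribed as a sketch. `score_dmarc` is a hypothetical helper; combinations outside the rubric fall through to 0 here:

```python
def score_dmarc(record: str) -> int:
    """Score a DMARC TXT record per the policy/rua/pct rubric."""
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    policy = tags.get("p", "")
    has_rua = "rua" in tags
    pct = int(tags.get("pct", "100"))  # pct defaults to 100 when absent
    if policy == "reject" and has_rua and pct == 100:
        return 100
    if policy == "quarantine" and has_rua:
        return 80
    if policy == "none":
        return 40 if has_rua else 20
    return 0
```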
Scoring:
- `p=reject` + `rua` + `pct=100`: 100 points
- `p=quarantine` + `rua`: 80 points
- `p=none` + `rua`: 40 points
- `p=none` without reporting: 20 points

What to check:
Commands:
dig mx <domain> +short
dig a <mx-hostname> +short
Scoring:
What to check:
Commands:
dig -x <mx-ip> +short
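`dig -x` resolves the `in-addr.arpa` name derived from the IP. The standard library can reproduce that query name, which is handy when scripting the check:

```python
import ipaddress

def ptr_query_name(ip: str) -> str:
    """Build the reverse-DNS (PTR) query name that `dig -x <ip>` resolves."""
    # e.g. 203.0.113.10 -> 10.113.0.203.in-addr.arpa
    return ipaddress.ip_address(ip).reverse_pointer
```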
Scoring:
What to check:
Commands:
openssl s_client -starttls smtp -connect <mx-hostname>:25 -brief
Note: This may require network access. If not available, note as "Unable to verify".
Scoring:
What to check:
Commands:
# Use checkdmarc library if available
python -c "import checkdmarc; print(checkdmarc.check_domains(['<domain>']))"
# Or manual checks
dig <ip>.zen.spamhaus.org +short
dig <ip>.b.barracudacentral.org +short
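DNSBL lookups query the blacklist zone with the IPv4 octets reversed. A hypothetical helper for building the name to pass to `dig`:

```python
def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build a DNSBL query name: reverse the IPv4 octets, append the zone."""
    # e.g. 203.0.113.10 + zen.spamhaus.org -> 10.113.0.203.zen.spamhaus.org
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone
```

A listing is indicated when the query resolves (typically to a `127.0.0.x` result code); NXDOMAIN means not listed.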
Scoring:
Critical: Any listing on major blacklists severely impacts deliverability.
Applies to: Domains sending 5,000+ emails/day to Gmail, Yahoo, Microsoft recipients.
Requirements (Google/Yahoo/Microsoft 2024-2026 rules):
- DMARC policy of at least `p=none` with alignment
- One-click unsubscribe headers (`List-Unsubscribe-Post: One-Click`)

Scoring:
Bonus points for:
- BIMI: TXT record at `default._bimi.<domain>`
- MTA-STS: policy file at `https://mta-sts.<domain>/.well-known/mta-sts.txt`
- TLS-RPT: TXT record at `_smtp._tls.<domain>` (TLS reporting)

Commands:
dig txt default._bimi.<domain> +short
curl https://mta-sts.<domain>/.well-known/mta-sts.txt
dig txt _smtp._tls.<domain> +short
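The MTA-STS policy file fetched above is a plain `key: value` text format (RFC 8461, with `mx` allowed to repeat). A minimal parser sketch:

```python
def parse_mta_sts(text: str) -> dict:
    """Parse an MTA-STS policy file into {key: [values]} (mx may repeat)."""
    policy = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            policy.setdefault(key.strip(), []).append(value.strip())
    return policy
```

Check that `version` is `STSv1` and flag `mode: testing` or `mode: none` as weaker than `enforce`.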
Scoring:
Formula:
Total Score = (SPF × 0.10) + (DKIM × 0.15) + (DMARC × 0.15) + (MX × 0.10) +
(PTR × 0.05) + (TLS × 0.10) + (Blacklists × 0.20) +
(Bulk Compliance × 0.10) + (Extras × 0.05)
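The formula above transcribed directly; category scores are each 0-100, and missing categories default to 0 in this sketch:

```python
# Weights mirror the Total Score formula above (they sum to 1.0).
WEIGHTS = {
    "spf": 0.10, "dkim": 0.15, "dmarc": 0.15, "mx": 0.10,
    "ptr": 0.05, "tls": 0.10, "blacklists": 0.20,
    "bulk_compliance": 0.10, "extras": 0.05,
}

def health_score(scores: dict) -> float:
    """Weighted 0-100 health score from per-category scores."""
    return round(sum(WEIGHTS[k] * scores.get(k, 0) for k in WEIGHTS), 1)
```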
Score Interpretation:
| Score | Rating | Status | Action |
|---|---|---|---|
| 90-100 | Excellent | All critical checks pass, fully compliant | Monitor regularly |
| 75-89 | Good | Minor issues, generally deliverable | Fix medium priority items |
| 60-74 | Fair | Issues that could impact inbox placement | Fix high priority items within 1 week |
| 40-59 | Poor | Significant deliverability risks | Fix critical items immediately |
| 0-39 | Critical | Major issues, emails likely going to spam | Emergency fixes required |
Ask user for:
Spawn these agents in parallel for faster auditing:
Agent 1: email-deliverability
Check DNS authentication records for <domain>:
- SPF record validation
- DKIM record discovery (selectors: google, default, selector1, selector2, k1)
- DMARC policy analysis
- MX record validation
- PTR/reverse DNS check
Return JSON with pass/fail status and raw records.
Agent 2: email-compliance
Check bulk sender compliance for <domain>:
- Verify both SPF and DKIM pass
- Check DMARC alignment
- Note TLS support
- Check for List-Unsubscribe headers (if sample email provided)
Return compliance checklist with met/not met status.
Agent 3: email-reputation (if tools available)
Check reputation for <domain>:
- Blacklist status (Spamhaus, Barracuda, SORBS, SpamCop)
- Historical deliverability issues
- Spam complaint rate (if available)
Return list of blacklist hits and reputation score.
Collect results from all agents and calculate weighted health score.
Categorize issues by priority:
Critical (Fix Immediately):
High (Fix Within 1 Week):
- SPF softfail (`~all` instead of `-all`)
- DMARC `p=none` (upgrade to `p=quarantine` or `p=reject`)

Medium (Fix Within 1 Month):
- Missing DMARC reporting (`rua` tag)

Provide exact DNS records to add/update with copy-paste-ready values.
Structure the audit report as:
- Title: `## Email Deliverability Audit: [domain]` with date, health score, and rating
- Use score weights from the Health Score Calculation section
- Use status badges: ✅ PASS, ⚠️ WARN, ❌ FAIL
Run `python scripts/check_deliverability.py <domain> --json` for automated DNS checks.
Manual DNS commands (used in audit categories above):
- `dig txt <domain> +short`
- `dig txt <selector>._domainkey.<domain> +short`
- `dig txt _dmarc.<domain> +short`
- `dig mx <domain> +short`
- `dig -x <ip> +short`
- `openssl s_client -starttls smtp -connect <mx-hostname>:25 -brief`

Before delivering audit results:
If DNS queries fail:
If DKIM selector unknown:
If no MX records found:
Load on demand:
- `references/deliverability-rules.md` - Scoring thresholds and compliance rules
- `references/mcp-integration.md` - Provider-specific setup and DNS configuration
- `references/compliance.md` - Compliance rules and regulatory requirements

Audit is successful when: