Runs unified pre-publish quality gate on marketing content: hallucination detection, claim verification, brand voice scoring, structure validation. Invoke before publishing copy.
Install: npx claudepluginhub indranilbanerjee/digital-marketing-pro
This skill is the canonical pre-publish gate for marketing content. It wraps the evaluation suite (scripts/eval-runner.py) and produces a single pass/fail decision with actionable issues.
Use this skill before publishing any marketing content — blog posts, ad copy, emails, social posts, landing pages, press releases, or any branded copy.
In v3.0 and earlier, a global PreToolUse hook auto-ran a hallucination + brand-compliance check on every Write/Edit operation in every project. v3.1 removed that hook because it fired globally across all plugins and projects (Slack writes, GitHub PRs, code edits — all of it), causing friction in non-marketing work.
/dm:check replaces that automatic gate with an explicit user-invoked gate. The work is the same; the trigger is intentional.
The check delegates to scripts/eval-runner.py (the master eval orchestrator), which calls six sibling scripts. Four map to the dimensions below:
| Dimension | Script | What it checks |
|---|---|---|
| Hallucination | hallucination-detector.py | Unattributed statistics, placeholder URLs (example.com / your-site.com), unsupported superlatives ("best", "#1", "leading"), fabricated citations |
| Claims | claim-verifier.py (when --evidence provided) | Cross-checks specific claims against a user-provided evidence file |
| Brand voice | brand-voice-scorer.py (when --brand provided) | Scores content against the active brand's voice profile (formality, energy, humor, authority, prefer/avoid words) |
| Structure | output-validator.py (when --schema provided) | Validates content matches expected schema (blog_post, email, ad_copy, social_post, landing_page, press_release, content_brief, campaign_plan) |
The remaining two, content-scorer.py (content quality) and readability-analyzer.py (readability), always run.
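For orientation, here is a minimal sketch of how an orchestrator in the spirit of eval-runner.py could fan out to the always-run sibling scripts and combine their scores into a weighted composite. The JSON contract between the scripts, the `--file` flag, the `score` field, and the weight values are all assumptions for illustration, not the actual implementation:

```python
import json
import subprocess
import sys

# Illustrative weights; the real values live in eval-runner.py.
WEIGHTS = {"hallucination": 0.40, "content_quality": 0.35, "readability": 0.25}

def run_dimension(script: str, file_path: str) -> dict:
    """Run one sibling script and parse its JSON report (assumed contract)."""
    proc = subprocess.run(
        [sys.executable, f"scripts/{script}", "--file", file_path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def composite(scores: dict) -> float:
    """Weighted average over whichever dimensions actually ran."""
    total_weight = sum(WEIGHTS[d] for d in scores)
    return sum(s * WEIGHTS[d] for d, s in scores.items()) / total_weight

scores = {
    "hallucination": run_dimension("hallucination-detector.py", "draft.md")["score"],
    "content_quality": run_dimension("content-scorer.py", "draft.md")["score"],
    "readability": run_dimension("readability-analyzer.py", "draft.md")["score"],
}
print(f"Composite: {composite(scores):.1f} / 100")
```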
/dm:check <file-path-or-content>
Runs the quick eval: hallucination detection + content quality + readability. Fast (~2 seconds), zero external dependencies. Use this for routine checks.
/dm:check <file-path-or-content> --full
Runs all 6 dimensions: hallucination + claims (if evidence provided) + brand voice (if brand provided) + structure (if schema provided) + content quality + readability. Use before publishing anything client-facing or external.
/dm:check <file-path-or-content> --compliance --brand <slug> [--evidence <path>] [--schema <name>]
Runs hallucination + claims + brand voice + structure. Best for regulated industries (healthcare, financial services, alcohol, cannabis, gambling) where claim substantiation and brand-voice fidelity matter most.
/dm:check <file-path> --evidence <evidence-file.json>
When the content makes specific claims you want to substantiate, provide a JSON evidence file:
{
  "evidence": [
    {
      "claim": "50% increase in conversions",
      "source": "GA4 Q4 report",
      "date": "2025-12-31",
      "verified": true
    },
    {
      "claim": "Trusted by Fortune 500 companies",
      "source": "Customer roster (internal)",
      "date": "2026-04-01",
      "verified": true
    }
  ]
}
The check will extract every claim from the content and flag any that don't match an evidence entry.
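A sketch of what that extract-and-match step could look like, assuming naive substring matching against verified evidence entries. The real claim-verifier.py almost certainly extracts claims more robustly; the CLAIM_PATTERN heuristic here is purely illustrative:

```python
import json
import re

# Illustrative heuristic: any sentence carrying a number or a
# superlative keyword is treated as a checkable claim.
CLAIM_PATTERN = re.compile(r"\d|\b(?:best|leading|trusted)\b", re.IGNORECASE)

def unverified_claims(content: str, evidence_path: str) -> list:
    with open(evidence_path) as f:
        verified = [e["claim"].lower()
                    for e in json.load(f)["evidence"] if e.get("verified")]
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", content):
        if CLAIM_PATTERN.search(sentence):
            # Naive match: the claim text must appear verbatim in the sentence.
            if not any(claim in sentence.lower() for claim in verified):
                flagged.append(sentence.strip())
    return flagged
```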
/dm:check <file-path> --schema blog_post
Validates the content matches the structural requirements of the named schema. Available schemas: blog_post, email, ad_copy, social_post, landing_page, press_release, content_brief, campaign_plan. Use --schema list to see all schemas with their requirements.
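As a rough illustration of what a structural check can look like. The actual requirements live in output-validator.py; the word limits and required elements below are invented for the sketch:

```python
# Hypothetical schema table; real requirements live in output-validator.py.
SCHEMAS = {
    "blog_post": {"min_words": 600, "needs_headings": True},
    "ad_copy":   {"max_words": 90,  "needs_headings": False},
    "email":     {"max_words": 500, "needs_headings": False},
}

def validate_structure(content: str, schema: str) -> list:
    spec, issues = SCHEMAS[schema], []
    words = len(content.split())
    if words < spec.get("min_words", 0):
        issues.append(f"too short: {words} words (min {spec['min_words']})")
    if words > spec.get("max_words", float("inf")):
        issues.append(f"too long: {words} words (max {spec['max_words']})")
    if spec["needs_headings"] and "\n#" not in content:
        issues.append("no markdown headings found")
    return issues
```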
/dm:check <file-path> --brand acme
Scores the content against the brand voice profile at ~/.claude-marketing/brands/acme/profile.json. Reports per-dimension breakdown (formality, energy, humor, authority) plus deviation from prefer/avoid word lists.
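A toy version of the prefer/avoid portion of that scoring, assuming profile.json carries "prefer" and "avoid" word lists. The per-dimension formality/energy/humor/authority scoring is omitted, and the scoring constants are invented:

```python
import json

def score_prefer_avoid(content: str, profile_path: str) -> dict:
    """Score only the prefer/avoid word dimension of a brand profile."""
    with open(profile_path) as f:
        profile = json.load(f)  # assumed keys: "prefer", "avoid"
    words = content.lower().split()
    prefer_hits = sum(words.count(w) for w in profile.get("prefer", []))
    avoid_hits = sum(words.count(w) for w in profile.get("avoid", []))
    # Invented constants: reward preferred vocabulary, penalize avoided words.
    score = max(0, min(100, 80 + 2 * prefer_hits - 10 * avoid_hits))
    return {"score": score, "prefer_hits": prefer_hits, "avoid_hits": avoid_hits}
```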
The check returns a unified report:
DM CHECK REPORT — <file or content snippet>
=============================================
Composite Score: 73.4 / 100 (Grade: B-)
Auto-Reject: NO
Dimensions:
  Hallucination ............ 96/100 PASS (weight 0.40)
  Content Quality .......... 78/100 PASS (weight 0.35)
  Readability .............. 65/100 PASS (weight 0.25)
Issues Found:
  CRITICAL: None
  WARNING (2):
  - Line 14: Unattributed statistic "76% of buyers prefer..."
    Suggestion: cite source or rephrase as observation
  - Line 22: Superlative "best in class" without substantiation
    Suggestion: replace with measurable claim or proof point
Decision: PASS — safe to publish but address WARNINGs first
If any CRITICAL issue is found, decision = BLOCKED and the user is asked to fix before publishing.
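The gate logic reduces to something like the sketch below. The only documented rule is that any CRITICAL issue blocks; the numeric composite floor is an assumption added for illustration:

```python
def decide(issues: list, composite_score: float, floor: float = 60.0) -> str:
    """Any CRITICAL issue blocks publication, per the rule above.
    The numeric floor is illustrative, not a documented threshold."""
    if any(i["severity"] == "CRITICAL" for i in issues):
        return "BLOCKED"
    if composite_score < floor:
        return "BLOCKED"
    return "PASS"
```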
The skill follows this flow:
1. If --brand is not specified, attempt to load the active brand from ~/.claude-marketing/brands/_active-brand.json (sketched below). If --schema is not specified, infer it from the content type when obvious (blog markdown → blog_post, etc.) or skip the structure check.
2. Select the eval action: run-quick (default), run-full (with --full), or run-compliance (with --compliance).
3. Invoke the orchestrator: python ${CLAUDE_PLUGIN_ROOT}/scripts/eval-runner.py --action run-quick --file <input> [--brand <slug>] [--evidence <path>] [--schema <name>]
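Step 1's brand fallback could look like this sketch. The shape of _active-brand.json (a "slug" key) is an assumption:

```python
import json
from pathlib import Path

def resolve_brand(explicit_slug=None):
    """Fall back to the active brand when --brand is omitted (flow step 1)."""
    if explicit_slug:
        return explicit_slug
    active = Path.home() / ".claude-marketing" / "brands" / "_active-brand.json"
    if active.exists():
        return json.loads(active.read_text()).get("slug")  # assumed key
    return None  # no brand at all: the brand-voice dimension is skipped
```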
Scripts involved:
- scripts/eval-runner.py — master orchestrator
- scripts/hallucination-detector.py — invoked by eval-runner
- scripts/claim-verifier.py — invoked by eval-runner if --evidence provided
- scripts/brand-voice-scorer.py — invoked by eval-runner if --brand provided
- scripts/output-validator.py — invoked by eval-runner if --schema provided
- scripts/content-scorer.py — invoked by eval-runner
- scripts/readability-analyzer.py — invoked by eval-runner
All scripts use stdlib only (except brand-voice-scorer, which optionally uses nltk). No external API calls, no internet required.
User: /dm:check drafts/q2-launch-blog.md
Skill:
1. Read drafts/q2-launch-blog.md
2. Run python scripts/eval-runner.py --action run-quick --file drafts/q2-launch-blog.md
3. Parse JSON output:
composite_score: 81.2, grade: B+, auto_rejected: false
hallucination: 92/100 pass, content_quality: 76/100 pass, readability: 84/100 pass
alerts: 1 warning ("unattributed stat in line 14")
4. Format report:
DM CHECK REPORT — drafts/q2-launch-blog.md
============================================
Composite Score: 81.2 / 100 (Grade: B+)
Decision: PASS
Dimensions:
  Hallucination ......... 92/100 pass
  Content Quality ....... 76/100 pass
  Readability ........... 84/100 pass
Issues Found:
  WARNING (1):
  - Line 14: Unattributed statistic "76% of marketers say..."
    Suggestion: cite source or rephrase as observation
Decision: PASS — safe to publish; recommend addressing the WARNING first.
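Step 3 of this example (parsing the orchestrator's output) is mechanical if eval-runner.py prints a JSON report to stdout. The field names below (composite_score, grade, auto_rejected, alerts) mirror the parsed values shown above, but the exact output contract is an assumption:

```python
import json
import subprocess
import sys

proc = subprocess.run(
    [sys.executable, "scripts/eval-runner.py",
     "--action", "run-quick", "--file", "drafts/q2-launch-blog.md"],
    capture_output=True, text=True, check=True,
)
report = json.loads(proc.stdout)
print(report["composite_score"], report["grade"], report["auto_rejected"])
for alert in report.get("alerts", []):
    print("WARNING:", alert)
```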
User: /dm:check drafts/healthcare-ad.md --full --brand healthfirst --evidence facts/q2-claims.json --schema ad_copy
Skill:
1. Read drafts/healthcare-ad.md
2. Run python scripts/eval-runner.py --action run-full --file drafts/healthcare-ad.md --brand healthfirst --evidence facts/q2-claims.json --schema ad_copy
3. Parse JSON output. Composite: 58.4, grade: D+, auto_rejected: true
4. Format report with CRITICAL issues highlighted
5. Decision: BLOCKED. Two unattributed health claims need substantiation before this can publish.
User: /dm:check drafts/financial-services-landing.md --compliance --brand finadvisor --evidence facts/finra-disclosures.json
Skill:
1. Read content
2. Run python scripts/eval-runner.py --action run-compliance --file drafts/financial-services-landing.md --brand finadvisor --evidence facts/finra-disclosures.json
3. Output prioritises hallucination + claim verification + brand voice + structure
4. Returns decision with FINRA-relevant issues highlighted
User: /dm:check "Our amazing product boosts conversion by 347% — visit example.com today!"
Skill:
1. Detect inline content (not a file path)
2. Write content to a temp file
3. Run quick eval
4. Report:
CRITICAL: 2
- Placeholder URL "example.com" — replace with real URL before publishing
- Unattributed statistic "347%" — fabricated stat or missing citation
Decision: BLOCKED
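Steps 1 and 2 of the inline case reduce to a small helper, sketched here with Python's stdlib tempfile. The function name materialize is hypothetical:

```python
import os
import tempfile

def materialize(input_arg: str) -> str:
    """Pass real file paths through; write inline content to a temp file."""
    if os.path.exists(input_arg):
        return input_arg
    fd, path = tempfile.mkstemp(suffix=".md", prefix="dm-check-")
    with os.fdopen(fd, "w") as f:
        f.write(input_arg)
    return path
```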
| Scenario | Recommended mode |
|---|---|
| Routine content check during drafting | /dm:check <file> (quick) |
| Before publishing any external content | /dm:check <file> --full --brand <slug> |
| Regulated industry content (healthcare / financial / alcohol / cannabis / gambling) | /dm:check <file> --compliance --brand <slug> --evidence <facts> |
| Client-facing deliverable (Growth Plan, Yearly Planner, monthly report) | /dm:check <file> --full --brand <slug> |
| Ad copy specifically | /dm:check <file> --schema ad_copy --brand <slug> |
| Email specifically | /dm:check <file> --schema email --brand <slug> |
| Blog post specifically | /dm:check <file> --schema blog_post --brand <slug> |
Brand voice falls back to the active brand at ~/.claude-marketing/brands/_active-brand.json. If no active brand, run without --brand (skip the brand voice dimension). If the user does not provide --evidence or --schema, note in the report that the corresponding dimensions were skipped.
Related skills:
- /dm:engagement growth-plan — produces the Part 8 deliverable; should be checked with /dm:check --full --schema content_brief before client delivery
- /dm:content-engine — produces marketing content; recommended workflow is /dm:content-engine → review → /dm:check → publish
- /dm:eval-content — legacy alias that will route to this skill in v3.2+
Related files:
- scripts/eval-runner.py — the master orchestrator this skill wraps
- skills/context-engine/eval-framework-guide.md — full eval framework documentation
- skills/context-engine/eval-rubrics.md — per-dimension scoring rubrics
- docs/architecture.md Section 11 — eval framework architecture