Evaluates marketing content quality across six dimensions: content quality, brand voice, hallucination risk, claim verification, structure, readability. Scores drafts and flags issues with suggested fixes before publication.
From digital-marketing-pro:

npx claudepluginhub indranilbanerjee/digital-marketing-pro --plugin digital-marketing-pro

This skill uses the workspace's default tool permissions.
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, Flutter. Aids planning, building, reviewing interfaces.
Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, Prisma. Use for setup questions, API references, and code examples.
Guides Payload CMS config (payload.config.ts), collections, fields, hooks, access control, APIs. Debugs validation errors, security, relationships, queries, transactions, hook behavior.
Comprehensive content evaluation using the full eval pipeline. Runs content through six scoring dimensions — content quality, brand voice, hallucination risk, claim verification, output structure, and readability — to produce a composite score with letter grade, flag specific issues with fix suggestions, and compare against brand quality baselines. This is the go-to command before any content goes to publication, client review, or campaign launch.
Every evaluation is logged to the quality tracker so regression detection, trend analysis, and brand-level quality reporting work continuously. If the brand has custom thresholds or dimension weights configured via /dm:eval-config, those are applied automatically — otherwise industry-standard defaults are used.
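The composite score described above can be sketched as a weighted average over the six dimensions with a letter-grade lookup. This is an illustrative sketch only: the dimension names come from the pipeline description, but the weights, grade cutoffs, and function names are assumptions, not the plugin's actual implementation.

```python
# Illustrative sketch of a weighted composite score with letter grade.
# Dimension names are from the eval pipeline description; the weights and
# grade cutoffs below are ASSUMED defaults, not the plugin's real values.

DEFAULT_WEIGHTS = {
    "content_quality": 0.20,
    "brand_voice": 0.20,
    "hallucination_risk": 0.15,
    "claim_verification": 0.15,
    "output_structure": 0.15,
    "readability": 0.15,
}

# (cutoff, grade) pairs checked top-down; 0 catches everything below 60.
GRADE_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def composite_score(dimension_scores, weights=None):
    """Weighted average of per-dimension scores on a 0-100 scale."""
    weights = weights or DEFAULT_WEIGHTS
    total_weight = sum(weights[d] for d in dimension_scores)
    score = sum(dimension_scores[d] * weights[d] for d in dimension_scores) / total_weight
    grade = next(g for cutoff, g in GRADE_CUTOFFS if score >= cutoff)
    return round(score, 1), grade
```

Custom dimension weights from /dm:eval-config would simply be passed in via the `weights` argument, overriding the defaults.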
The user must provide (or will be prompted for):
- Content type (optional): blog_post, email, ad_copy, social_post, landing_page, press_release, content_brief, campaign_plan, or custom. If omitted, the eval runner auto-detects it from the content's structure and length. Content type determines which built-in schema is used for structure validation and which readability benchmarks apply.
- Evidence file (optional): a JSON array of the form [{"claim": "...", "source": "...", "date": "...", "verified": true}]. If not provided, claim verification runs in extraction-only mode and flags all specific claims as "unverified — evidence recommended".

Workflow:

- Load brand context: Read ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply brand voice, compliance rules for target markets (skills/context-engine/compliance-rules.md), and industry context. Also check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json — if present, load restrictions and relevant category files (especially messaging.md for voice scoring and visual-identity.md for format standards). Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" — or proceed with defaults.
- Retrieve the eval config: Run scripts/eval-config-manager.py --brand {slug} --action get-config to retrieve brand-specific thresholds, dimension weights, and auto-reject rules. If no custom config exists, use defaults from skills/context-engine/eval-framework-guide.md. Note which settings are custom vs. default in the output.
- Run the full eval: Run scripts/eval-runner.py --brand {slug} --action run-full --text "{content}" --content-type {content_type}, with optional --evidence {evidence_file} and --schema {schema_file} flags. This runs all six dimensions: content quality, brand voice, hallucination risk, claim verification, output structure, and readability.
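The "custom vs. default" config step above amounts to overlaying brand-specific settings on framework defaults while recording where each value came from. A minimal sketch, assuming hypothetical setting names — the real defaults live in skills/context-engine/eval-framework-guide.md and the brand's eval config:

```python
# Sketch of the custom-vs-default config merge. Setting names and default
# values here are ASSUMPTIONS for illustration, not the plugin's real config.

DEFAULTS = {
    "min_composite": 70,        # hypothetical publish threshold
    "auto_reject_below": 50,    # hypothetical auto-reject rule
    "readability_target": "grade8",
}

def merge_config(custom):
    """Overlay brand-specific settings on defaults, tracking provenance."""
    merged, provenance = {}, {}
    for key, default in DEFAULTS.items():
        if custom and key in custom:
            merged[key] = custom[key]
            provenance[key] = "custom"
        else:
            merged[key] = default
            provenance[key] = "default"
    return merged, provenance
```

The provenance map is what lets the report note which settings are custom vs. default, as the step requires.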
Consult skills/context-engine/eval-rubrics.md for dimension-specific fix guidance.

- Compare against history: Run scripts/quality-tracker.py --brand {slug} --action get-trends --days 30 to pull the brand's recent quality history. If historical data exists, show how this content's composite score and individual dimension scores compare to the 30-day rolling average — above average, at average, or below average, with the delta. Flag if this content would lower the brand's average.
- Log the evaluation: Run scripts/quality-tracker.py --brand {slug} --action log-eval --content-type {type} --data '{"composite": {score}, "dimensions": {dimension_scores_json}}' to persist the evaluation for trend tracking and regression detection. This step is mandatory — every evaluation must be logged.

Output: A structured evaluation report containing:
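The trend comparison described above reduces to a delta against the rolling average plus a "would lower the average" flag. A minimal sketch, assuming the quality tracker returns a plain list of past composite scores (the function and field names are illustrative, not the tracker's actual API):

```python
# Sketch of the 30-day trend comparison: new composite score vs. the
# rolling average of past scores. Names and return shape are ASSUMPTIONS.

def compare_to_trend(new_score, history):
    """history: past composite scores pulled from the quality tracker."""
    if not history:
        return {"status": "no-history", "delta": None, "lowers_average": False}
    avg = sum(history) / len(history)
    delta = round(new_score - avg, 1)
    if delta > 0:
        status = "above average"
    elif delta < 0:
        status = "below average"
    else:
        status = "at average"
    # Any score below the current average drags the rolling average down.
    return {"status": status, "delta": delta, "lowers_average": new_score < avg}
```

For example, a new score of 80 against a history averaging 90 reports "below average" with a delta of -10 and flags that logging it would lower the brand's average.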