Configures brand content evaluation settings: adjust score thresholds, dimension weights (e.g., hallucination risk, brand voice), auto-reject rules, and content-type overrides. Use when tuning quality standards.
From digital-marketing-pro. Install: npx claudepluginhub indranilbanerjee/digital-marketing-pro --plugin digital-marketing-pro. This skill uses the workspace's default tool permissions.
Configure the evaluation system for a brand. Set minimum quality thresholds per dimension, adjust scoring weights based on industry priorities and content strategy, configure auto-reject thresholds that prevent substandard content from passing evaluation, and define content-type-specific quality standards that apply different bars to different formats.
The eval config determines how strictly content is scored and what the quality bar looks like for the brand. A healthcare company may weight hallucination risk and claim verification heavily while relaxing readability thresholds for technical audiences. A consumer brand may prioritize brand voice and readability while accepting lighter claim verification for awareness content. An agency managing multiple brands can set different configs per brand. This command makes those trade-offs explicit and adjustable rather than buried in defaults.
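To make the healthcare example concrete, a hypothetical config for such a brand might look like the fragment below. The field names and layout here are illustrative assumptions, not the plugin's actual schema; only the dimension names and default weight values come from this document.

```json
{
  "weights": {
    "content_quality": 0.20,
    "brand_voice": 0.10,
    "hallucination_risk": 0.25,
    "claim_verification": 0.25,
    "output_structure": 0.10,
    "readability": 0.10
  },
  "thresholds": {
    "hallucination_risk": 0.90,
    "readability": 0.60
  },
  "auto_reject_below": 0.75
}
```

Note how hallucination risk and claim verification together carry half the composite weight, while the readability threshold is relaxed for a technical audience.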
The user must provide (or will be prompted for):
An action, one of:
- view — show current settings
- set-threshold — change a minimum score for a dimension
- set-weights — change the dimension weight distribution
- set-auto-reject — change the composite score below which content automatically fails
- set-content-type — configure content-type-specific overrides
- recommend — get industry-appropriate settings suggestions
- reset — restore all settings to defaults

A dimension (for threshold changes): content_quality, brand_voice, hallucination_risk, claim_verification, output_structure, readability, or composite.

Weights (for set-weights), defaulting to {"content_quality": 0.25, "brand_voice": 0.20, "hallucination_risk": 0.20, "claim_verification": 0.15, "output_structure": 0.10, "readability": 0.10}. Weights must sum to approximately 1.0 (tolerance of +/- 0.02 for rounding).

Context loading: read ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply industry context for recommendation generation — different industries have different quality priorities. Also check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json — if present, note any quality requirements defined in guidelines that should inform threshold recommendations. Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" — or proceed with defaults.

Workflow by action:
- view: run scripts/eval-config-manager.py --brand {slug} --action get-config to retrieve all current settings — global thresholds, dimension weights, auto-reject threshold, and any content-type-specific overrides. Identify which settings are custom (set by the user) and which are defaults.
- set-threshold: run scripts/eval-config-manager.py --brand {slug} --action set-threshold --dimension {dimension} --value {threshold}. Show a before/after comparison with the impact on scoring strictness.
- set-weights: run scripts/eval-config-manager.py --brand {slug} --action set-weights --weights '{weights_json}'. Show a before/after comparison with an example of how the same content would score differently under old vs. new weights.
- set-auto-reject: run scripts/eval-config-manager.py --brand {slug} --action set-auto-reject --value {score}. Show the impact — how many of the brand's recent evaluations would have been auto-rejected under the new threshold vs. the old one.
- set-content-type: run scripts/eval-config-manager.py --brand {slug} --action set-content-type --type {content_type} --overrides '{overrides_json}'. Show how this content type's effective config now differs from the global config.
- recommend: consult skills/context-engine/eval-framework-guide.md for industry-specific recommendations. Present suggestions with rationale — e.g., "Healthcare brands should weight hallucination risk at 0.25+ because unverified health claims carry regulatory risk".
- reset: run scripts/eval-config-manager.py --brand {slug} --action reset-config. Show what changes from the current custom config back to defaults and confirm before executing.

Output: A structured configuration report containing:
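The weight-sum constraint described above (weights must cover every dimension and sum to approximately 1.0, within +/- 0.02) can be sketched as a small validation helper. This is a minimal illustration; eval-config-manager.py may implement the check differently.

```python
# Sketch of the set-weights validation rule: all six dimensions present,
# and the values summing to ~1.0 within the documented +/- 0.02 tolerance.

DIMENSIONS = (
    "content_quality", "brand_voice", "hallucination_risk",
    "claim_verification", "output_structure", "readability",
)

def validate_weights(weights: dict) -> bool:
    """Return True if weights cover exactly the six dimensions and sum to ~1.0."""
    if set(weights) != set(DIMENSIONS):
        return False  # missing or unknown dimension keys
    return abs(sum(weights.values()) - 1.0) <= 0.02

# Default distribution from the documentation above.
DEFAULT_WEIGHTS = {
    "content_quality": 0.25, "brand_voice": 0.20,
    "hallucination_risk": 0.20, "claim_verification": 0.15,
    "output_structure": 0.10, "readability": 0.10,
}
```

For example, `validate_weights(DEFAULT_WEIGHTS)` passes, while bumping a single weight by 0.10 without rebalancing the others would fail the tolerance check.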