Generates quality trend reports from evaluation data, analyzing scores over time, by content type, and by dimension. Use it to review performance, catch regressions, and surface improvement recommendations.
From digital-marketing-pro. Install: `npx claudepluginhub indranilbanerjee/digital-marketing-pro --plugin digital-marketing-pro`. This skill uses the workspace's default tool permissions.
Quality intelligence reporting over time. Shows eval score trends across days and weeks, identifies which content types are improving or declining, raises regression alerts where quality has dropped below established baselines, surfaces the brand's best- and worst-performing content, and provides actionable recommendations for improving content quality across the organization.
This command turns the evaluation data logged by /dm:eval-content into strategic insight. Instead of evaluating a single piece of content, it analyzes the pattern across all evaluations to answer: Is our content quality improving or declining? Which content types are strongest? Which dimensions need the most work? Are there regressions we need to address? What specific changes will have the biggest impact on overall quality?
The user must provide (or will be prompted for):
- Period: `7d`, `14d`, `30d`, `60d`, `90d`, or a custom date range (`YYYY-MM-DD` to `YYYY-MM-DD`). Defaults to 30 days. Longer periods provide better trend visibility but may include outdated data from before process changes.
- Content type: `blog_post`, `email`, `ad_copy`, `social_post`, `landing_page`, `press_release`, `content_brief`, `campaign_plan`, or `all`. Defaults to all types. Useful for drilling into a specific content stream's quality trajectory.
- Dimension: `content_quality`, `brand_voice`, `hallucination_risk`, `claim_verification`, `output_structure`, `readability`, or `all`. Defaults to all dimensions. Useful when the team is working on improving a specific quality aspect.

The command then proceeds in steps:

1. Read `~/.claude-marketing/brands/_active-brand.json` for the active slug, then load `~/.claude-marketing/brands/{slug}/profile.json`. Apply brand quality standards and industry context for benchmark comparison. Also check for guidelines at `~/.claude-marketing/brands/{slug}/guidelines/_manifest.json`; if present, load any quality targets or SLA definitions. Check for agency SOPs at `~/.claude-marketing/sops/`, since agency workflows may define minimum quality thresholds for client deliverables. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" or proceed with defaults.
2. Run `scripts/quality-tracker.py --brand {slug} --action get-trends --days {period}` to retrieve time-series evaluation data: composite scores and per-dimension scores plotted over the reporting window. If a content type filter is applied, pass `--content-type {content_type}`. This returns daily and weekly aggregates, moving averages, and trend direction indicators.
3. Run `scripts/quality-tracker.py --brand {slug} --action get-summary --days {period}` to retrieve aggregate statistics: total evaluations run, average composite score, grade distribution (how many A's, B's, C's, etc.), pass/fail/review breakdown, and per-dimension averages with standard deviations.
4. Run `scripts/quality-tracker.py --brand {slug} --action check-regression --days {period}` to detect statistically significant quality drops.
The regression detector compares the most recent 7-day average against the full-period baseline and flags any dimension or content type where quality has declined by more than one standard deviation. Each regression alert includes the severity (minor, moderate, or severe), the dimension or content type affected, the baseline value, the current value, and the trend direction.

Then run `scripts/quality-tracker.py --brand {slug} --action get-best --days {period} --limit 5` and `scripts/quality-tracker.py --brand {slug} --action get-worst --days {period} --limit 5` to retrieve the highest- and lowest-scoring evaluations in the period. These provide concrete examples of what good and poor quality look like for this brand. Consult `skills/context-engine/eval-rubrics.md` for dimension-specific improvement strategies.

The output is a structured quality intelligence report containing:
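The baseline comparison described above can be sketched in Python. The data shape, function name, and the exact severity cutoffs are illustrative assumptions, not the actual `scripts/quality-tracker.py` implementation; only the one-standard-deviation flag matches the rule stated above.

```python
from statistics import mean, stdev

def check_regression(scores, recent_days=7):
    """Flag a quality regression for one dimension or content type.

    Compares the most recent `recent_days` average against the
    full-period baseline, measured in baseline standard deviations.
    `scores` is a list of (day_index, composite_score) pairs, oldest
    first. Data shape and severity cutoffs are illustrative.
    """
    values = [score for _, score in scores]
    if len(values) < recent_days + 2:
        return None  # too little data for a stable baseline
    baseline_mean = mean(values)
    baseline_sd = stdev(values)  # sample standard deviation
    if baseline_sd == 0:
        return None  # perfectly flat history, nothing to flag
    recent_mean = mean(values[-recent_days:])
    drop_in_sds = (baseline_mean - recent_mean) / baseline_sd
    if drop_in_sds <= 1.0:
        return None  # flagged only past one standard deviation
    # Severity bands below are assumed for illustration.
    if drop_in_sds < 1.5:
        severity = "minor"
    elif drop_in_sds < 2.0:
        severity = "moderate"
    else:
        severity = "severe"
    return {
        "severity": severity,
        "baseline": round(baseline_mean, 2),
        "current": round(recent_mean, 2),
        "trend": "declining",
    }
```

For example, 23 days of composite scores around 80 followed by a week around 60 yields a drop of roughly 1.8 standard deviations, which this sketch would report as a moderate regression.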