Scores translated and localized content quality across accuracy, cultural adaptation, brand voice preservation, formatting, and compliance. Outputs a composite score that classifies content as publish, review, or re-translate.
Score translated or localized content across multiple quality dimensions to determine whether it is ready for publishing, needs native speaker review, or requires re-translation. Combines technical translation scoring (length ratios, formatting preservation, placeholder integrity, key term consistency) with content quality evaluation, brand voice consistency checking, and market-specific compliance verification into a single composite multilingual quality score.
Use this command after any translation or localization workflow to validate quality before content goes live. It replaces subjective "looks good" assessments with a structured, repeatable scoring methodology that catches issues automated translation often introduces — brand voice drift, formatting damage, missing do-not-translate terms, compliance gaps in the target market, and length distortion that signals missing or added content. The composite score provides a clear publish/review/re-translate classification so the team knows exactly what action to take.
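As a rough illustration of how a composite score could map to the publish/review/re-translate classification, here is a minimal sketch. The weights, thresholds, and the compliance-gate rule are invented for this example, not the plugin's actual configuration:

```python
# Sketch: combine sub-scores (0-100) into a weighted composite and classify.
# Weights and thresholds are illustrative assumptions, not the plugin's
# real values.

WEIGHTS = {
    "technical": 0.30,    # length ratio, formatting, placeholders, key terms
    "quality": 0.30,      # standalone content quality in the target language
    "brand_voice": 0.20,  # voice drift vs. the brand profile
    "compliance": 0.20,   # market-specific legal elements
}

def composite_score(sub_scores: dict[str, float]) -> float:
    """Weighted average of the four sub-scores."""
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

def classify(score: float, compliance: float) -> str:
    # A hard compliance failure forces re-translation regardless of the
    # composite, since non-compliant content cannot go live.
    if compliance < 50:
        return "re-translate"
    if score >= 85:
        return "publish"
    if score >= 65:
        return "review"
    return "re-translate"

scores = {"technical": 92, "quality": 88, "brand_voice": 80, "compliance": 90}
total = composite_score(scores)
print(round(total, 1), classify(total, scores["compliance"]))  # prints: 88.0 publish
```

Gating on compliance separately from the weighted average reflects the section above: a high-reading translation that misses a required legal element should never classify as publish.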
The user must provide (or will be prompted for):
- Source language (e.g., en-US, en-GB, de-DE). Defaults to the brand's primary language from the language configuration if not specified.
- Target language (e.g., de-DE, fr-FR, hi-IN, ja-JP). Required; determines which compliance rules apply and which translation service benchmarks to reference.
- Do-not-translate terms. Defaults to the brand's language.do_not_translate list. Additional terms can be provided to supplement the brand list for this specific scoring run.
- Content type: blog, email, ad, landing_page, social, product_description, legal, or technical. Affects content quality scoring weights and brand voice expectations. Defaults to auto-detection based on content structure.

Before scoring, read ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply brand voice dimensions, compliance rules for the target markets (skills/context-engine/compliance-rules.md), and industry context. Load the language configuration, specifically the do-not-translate terms from language.do_not_translate and any translation quality baselines from past scoring runs. Also check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json; if present, load restrictions and voice-and-tone rules that apply across all languages. Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" or proceed with defaults.

Then run language-router.py --action score with the original content, translated content, source language, target language, and do-not-translate terms.
This produces four sub-scores:

- Length ratio: translated content length versus expected length for the language pair. For example, German typically runs 20-30% longer than English, and Japanese typically runs 10-20% shorter; deviations beyond the expected range indicate missing or added content.
- Formatting preservation: markdown structure, HTML tags, merge tags like {{first_name}}, UTM parameters, and link structures survived translation intact.
- Key term consistency: every do-not-translate term from the brand profile, plus any additional specified terms, appears exactly as specified in the translation.
- Placeholder integrity: all variables, template tokens, and dynamic content markers are present and correctly positioned in the translated version.

Next, run eval-runner.py --action run-quick on the translated content alone, scoring it as standalone content in the target language. This assesses structural quality, readability for the target audience, completeness, and coherence, catching cases where a translation is technically accurate but reads poorly as native content. The eval-runner scores content against the plugin's standard quality dimensions regardless of whether it was translated or originally authored.

Then run brand-voice-scorer.py --brand {slug} --text "{translated_content}" to score how well the translated content matches the brand's voice profile. Brand voice should survive translation: the brand should sound recognizably like itself in every language, adapted for local expectations but maintaining its core personality dimensions (formality, energy, humor, authority). Score the translated content against the same voice dimensions as the original to detect voice drift introduced during translation.

Finally, verify market-specific compliance per skills/context-engine/compliance-rules.md. For EU languages: GDPR consent language, cookie consent, right-to-erasure references. For hi-IN and other Indian languages: DPDPA compliance. For pt-BR: LGPD. For ko-KR: PIPA. For ja-JP: APPI. For en-US: CAN-SPAM, and CCPA/CPRA where applicable.
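A minimal sketch of the technical checks described above. The expected length-ratio bands, the merge-tag pattern, and the example strings are assumptions for illustration, not the actual logic of language-router.py:

```python
import re

# Illustrative expected length-ratio bands (translated chars / source chars)
# per language pair; real benchmarks would come from the scoring service.
EXPECTED_RATIO = {
    ("en", "de"): (1.10, 1.35),  # German typically runs longer
    ("en", "ja"): (0.70, 0.95),  # Japanese typically runs shorter
}

# Merge tags like {{first_name}}; real templates may use other token syntaxes.
PLACEHOLDER = re.compile(r"\{\{\s*\w+\s*\}\}")

def length_ratio_ok(src: str, dst: str, pair: tuple[str, str]) -> bool:
    """Flag translations whose length deviates beyond the expected band."""
    lo, hi = EXPECTED_RATIO.get(pair, (0.80, 1.25))
    return lo <= len(dst) / max(len(src), 1) <= hi

def placeholders_intact(src: str, dst: str) -> bool:
    # Every merge tag in the source must survive in the translation
    # (order-insensitive here; a stricter check could compare positions).
    return sorted(PLACEHOLDER.findall(src)) == sorted(PLACEHOLDER.findall(dst))

def missing_terms(dst: str, do_not_translate: list[str]) -> list[str]:
    # Do-not-translate terms must appear verbatim in the translation.
    return [t for t in do_not_translate if t not in dst]

src = "Your Acme order has shipped, {{first_name}}."
dst = "Ihre Acme-Bestellung wurde versandt, {{first_name}}."
print(length_ratio_ok(src, dst, ("en", "de")),
      placeholders_intact(src, dst),
      missing_terms(dst, ["Acme"]))  # prints: True True []
```

A damaged translation (a dropped merge tag, a translated brand term, or a suspiciously short output) fails the corresponding check, which is what drives the technical sub-scores down.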
Verify that required compliance elements are present and correctly localized in the translated content, not just copied in English. Score as compliant, partially compliant (elements present but not fully localized), or non-compliant (required elements missing).

The output is a structured multilingual quality scorecard containing: