Score and review existing narrative files against story arc quality gates. This skill should be used when the user asks to 'review a narrative', 'score a narrative', 'check narrative quality', 'validate narrative', 'audit narrative', 'grade a narrative', 'evaluate narrative quality', 'narrative scorecard', 'rate my narrative', 'run quality gates on a narrative', or when the narrative-reviewer agent evaluates a generated narrative.
From cogni-narrative: `npx claudepluginhub cogni-work/insight-wave --plugin cogni-narrative`

This skill is limited to using the following tools:
Evaluate an existing narrative markdown file against the cogni-narrative quality gates. Produce a structured scorecard with pass/warn/fail per gate, an overall score (0-100), and the top 3 actionable improvement suggestions.
Not for:
- Generating a new narrative (use the cogni-narrative:narrative skill instead)
- Adapting an existing narrative (use the cogni-narrative:narrative-adapt skill instead)

| Parameter | Required | Description |
|---|---|---|
| `--source-path` | Yes | Path to the narrative `.md` file to review |
| `--arc-id` | No | Override arc detection (uses frontmatter `arc_id` by default) |
| `--language` | No | Override language detection (uses frontmatter `language` by default) |
Two outputs:
1. `{source-dir}/narrative-review.md` -- the review report, written next to the source file
2. A JSON summary:

```json
{
  "success": true,
  "source_path": "insight-summary.md",
  "arc_id": "corporate-visions",
  "overall_score": 82,
  "grade": "B",
  "gates": {
    "structural": "pass",
    "critical": "pass",
    "evidence": "warn",
    "structure": "pass",
    "language": "pass"
  },
  "top_improvements": [
    "Add 3 more citations to reach minimum 15 (currently 12)",
    "Expand 'Why Now' section by ~40 words to meet its 21% element allocation (~299 words at default T=1675)",
    "Add citation to uncited quantitative claim in paragraph 3 of 'Why Change'"
  ]
}
```
| Score | Grade | Meaning |
|---|---|---|
| 90-100 | A | Publication-ready, all gates pass |
| 80-89 | B | Strong, minor improvements possible |
| 70-79 | C | Acceptable, several improvements needed |
| 60-69 | D | Below standard, significant rework needed |
| 0-59 | F | Fails critical gates, major rework required |
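The score-to-grade mapping above can be sketched as a small function (the name `grade_for` is illustrative, not part of the skill's interface):

```python
def grade_for(score: int) -> str:
    """Map an overall score (0-100) to a letter grade per the table above."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"
```

For example, the sample output's `overall_score` of 82 maps to grade `"B"`.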
Read the file at `--source-path` and parse the frontmatter fields: `title`, `subtitle`, `arc_id`, `arc_display_name`, `target_length`, `word_count`, `language`, `date_created`, `source_file_count`. Resolve `arc_id` from: explicit parameter > frontmatter > detection failure. Resolve `language` from: explicit parameter > frontmatter > default `en`.

Read the arc definition to know expected element names, word targets, and quality gates:

- `../narrative/references/story-arc/arc-registry.md` -- for arc metadata
- `../narrative/references/story-arc/{arc_id}/arc-definition.md` -- for element definitions and word targets
- `../narrative/references/language-templates.md` -- for localized header names

Store the expected element names, proportions, and citation requirements. Read `target_length` from the narrative's frontmatter to compute expected word ranges. If `target_length` is absent (legacy narratives), default to 1675. Compute `total_lower = target_length * 0.85` and `total_upper = target_length * 1.15`, then per-element ranges: `[proportion * total_lower, proportion * total_upper]`.
Evaluate the narrative against each gate category. Use the scoring rubric in references/scoring-rubric.md.
Gate evaluation order (matches narrative skill Phase 5): Structural, Critical, Evidence, Structure, Language.
For each gate:
Assign a status of pass / warn / fail.

Write `narrative-review.md` to the same directory as the source file:
---
type: narrative-review
source: "{source filename}"
arc_id: "{arc_id}"
overall_score: {0-100}
grade: "{A-F}"
date_reviewed: "{ISO 8601}"
---
# Narrative Review: {source filename}
**Arc:** {arc_display_name} | **Score:** {score}/100 ({grade}) | **Language:** {language}
---
## Gate Results
| Gate | Status | Score | Details |
|------|--------|-------|---------|
| Structural | {pass/warn/fail} | {x}/30 | {summary} |
| Critical | {pass/warn/fail} | {x}/25 | {summary} |
| Evidence | {pass/warn/fail} | {x}/25 | {summary} |
| Structure | {pass/warn/fail} | {x}/10 | {summary} |
| Language | {pass/warn/fail} | {x}/10 | {summary} |
| **Total** | | **{total}/100** | |
---
## Top 3 Improvements
1. {Most impactful improvement with specific action}
2. {Second improvement with specific action}
3. {Third improvement with specific action}
---
## Detailed Analysis
### Structural Gate ({x}/30)
{Detailed findings for each structural criterion}
### Critical Gate ({x}/25)
{Detailed findings for each critical criterion}
### Evidence Gate ({x}/25)
{Detailed findings for each evidence criterion}
### Structure Gate ({x}/10)
{Detailed findings for each structure criterion}
### Language Gate ({x}/10)
{Detailed findings for each language criterion}
Return the JSON summary (see Output section above).
For detailed scoring criteria per gate -- including partial credit rules, counting methods, and edge cases -- load references/scoring-rubric.md.
Gate summary: Structural (30 pts) | Critical (25 pts) | Evidence (25 pts) | Structure (10 pts) | Language (10 pts)
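A minimal sketch of the aggregation these weights imply (names are illustrative; per-gate scoring details live in the rubric):

```python
# Maximum points per gate, summing to the 100-point overall score.
GATE_MAX = {"structural": 30, "critical": 25, "evidence": 25,
            "structure": 10, "language": 10}

def overall_score(gate_scores: dict[str, int]) -> int:
    """Sum per-gate scores into the 0-100 total, clamping each gate to [0, max]."""
    return sum(min(max(gate_scores.get(gate, 0), 0), cap)
               for gate, cap in GATE_MAX.items())
```

With every gate at its maximum this returns 100; the sample review's gate scores would sum to its overall score of 82.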
| File | Purpose | Load When |
|---|---|---|
| `references/scoring-rubric.md` | Detailed scoring weights and edge cases | Step 3 |
Cross-skill dependencies (files owned by the narrative skill):
| File | Purpose | Load When |
|---|---|---|
| `../narrative/references/story-arc/arc-registry.md` | Arc metadata and detection algorithm | Step 2 |
| `../narrative/references/story-arc/{arc_id}/arc-definition.md` | Element names and word targets | Step 2 |
| `../narrative/references/language-templates.md` | Localized header names per arc | Step 2 |