Pre-launch quality evaluation and improvement recommendations. Uses evaluation frameworks to systematically score and improve project outputs before public exposure. Saves to .ideas/evaluation-results.md.
From strategy-toolkit. Install with `npx claudepluginhub nsalvacao/nsalvacao-claude-code-plugins --plugin strategy-toolkit/strategic-review`. Strategic planning review — 7-30 day horizon with undated task audit.
Systematic quality evaluation before public exposure. This command should be run AFTER the project is functionally complete but BEFORE any public launch (HN, Reddit, blog posts, Product Hunt, etc.).
Prevent embarrassing quality gaps from undermining a strong project. A single broken demo, unclear README, or poor-quality generated output can kill launch momentum permanently.
Read first, if they exist:
- `.ideas/brainstorm-expansion.md` — understand strategic goals
- `.ideas/execution-plan.md` — understand quality audit findings

Score every generated/public output on these dimensions:
| Dimension | Score (1-5) | Weight | Criteria |
|---|---|---|---|
| Accuracy | | 25% | Does the output correctly represent reality? No hallucinations, wrong data, or stale info? |
| Completeness | | 20% | Does it cover all cases? What percentage of the domain is captured? |
| Usefulness | | 25% | Does it actually help the user accomplish their goal better than alternatives? |
| Format Quality | | 15% | Is it well-structured, consistent, readable, properly formatted? |
| Freshness | | 15% | Is the data current? Are there stale references or outdated information? |
#### [Output Name]
| Dimension | Score | Evidence |
|-----------|-------|----------|
| Accuracy | [1-5] | [specific examples] |
| Completeness | [1-5] | [what's missing] |
| Usefulness | [1-5] | [compared to what alternative?] |
| Format Quality | [1-5] | [specific issues] |
| Freshness | [1-5] | [stale items found] |
| **Weighted Total** | **[/5]** | |
**Top 3 improvements needed:**
1. [specific, actionable improvement]
2. [specific, actionable improvement]
3. [specific, actionable improvement]
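The weighted total above can be computed mechanically. A minimal sketch (the dictionary keys and example scores are illustrative, not part of the command):

```python
# Weights from the rubric above; scores are on the 1-5 scale.
WEIGHTS = {"accuracy": 0.25, "completeness": 0.20, "usefulness": 0.25,
           "format_quality": 0.15, "freshness": 0.15}

def weighted_total(scores):
    """Sum of score * weight across all five dimensions."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    return round(sum(scores[d] * w for d, w in WEIGHTS.items()), 2)

print(weighted_total({"accuracy": 4, "completeness": 3, "usefulness": 5,
                      "format_quality": 4, "freshness": 2}))  # -> 3.75
```

Raising on a missing dimension keeps a partially scored output from silently inflating its total.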
If the project generates multiple outputs (e.g., plugins for different CLIs):
### Cross-Output Consistency
| Output | Accuracy | Completeness | Usefulness | Format | Freshness | Total |
|--------|----------|-------------|------------|--------|-----------|-------|
| [output 1] | [score] | ... | ... | ... | ... | [weighted] |
| [output 2] | [score] | ... | ... | ... | ... | [weighted] |
| ... | ... | ... | ... | ... | ... | ... |
**Best**: [which and why]
**Worst**: [which and why]
**Systematic bias**: [if any detected]
**Edge case gaps**: [if any found]
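One way to surface systematic bias: if a dimension's mean is low across *all* outputs, the weakness is in the generation pipeline rather than any single output. A sketch with made-up scores (the output names and numbers are illustrative):

```python
import statistics

# Illustrative per-output scores; real values come from the tables above.
outputs = {
    "plugin-a": {"accuracy": 4, "completeness": 3, "usefulness": 4, "format": 5, "freshness": 3},
    "plugin-b": {"accuracy": 4, "completeness": 2, "usefulness": 4, "format": 5, "freshness": 2},
    "plugin-c": {"accuracy": 3, "completeness": 2, "usefulness": 4, "format": 4, "freshness": 2},
}

def dimension_report(outputs):
    """Per-dimension (mean, spread); a low mean everywhere = systematic bias."""
    report = {}
    for dim in next(iter(outputs.values())):
        scores = [s[dim] for s in outputs.values()]
        report[dim] = (statistics.mean(scores), max(scores) - min(scores))
    return report

for dim, (mean, spread) in dimension_report(outputs).items():
    flag = "  <- systematic weakness" if mean < 3 else ""
    print(f"{dim:12s} mean={mean:.2f} spread={spread}{flag}")
```

A large spread with a healthy mean points at an edge-case gap in one output instead.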
The README is the #1 conversion surface. Evaluate:
| Element | Present? | Score (1-5) | Notes |
|---|---|---|---|
| Hook/tagline | Yes/No | | Does it grab attention in 5 seconds? |
| Problem statement | Yes/No | | Is the pain point immediately clear? |
| Solution | Yes/No | | Is the value prop obvious? |
| Demo GIF/video | Yes/No | | Does it show the magic in <30 seconds? |
| Badges | Yes/No | | License, CI, version, downloads, language |
| Quick start (< 3 commands) | Yes/No | | Can someone try it in 60 seconds? |
| Comparison with alternatives | Yes/No | | Why this over existing solutions? |
| Stats/proof | Yes/No | | Numbers that prove it works |
| Installation | Yes/No | | Clear, copy-pasteable |
| Examples | Yes/No | | Real-world use cases |
| Architecture | Yes/No | | For those who want to understand internals |
| Contributing guide | Yes/No | | Path for community involvement |
| License | Yes/No | | Clear and visible |
| Author/credits | Yes/No | | Who built this |
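Some of the Present?/missing checks can be bootstrapped with pattern matching. A heuristic sketch (the patterns are assumptions about common README conventions; a human still judges quality):

```python
import re

# Crude presence heuristics, keyed to the checklist above.
CHECKS = {
    "badges": r"img\.shields\.io",
    "demo media": r"\.(gif|mp4|webm)",
    "quick start": r"(?i)quick\s*start",
    "installation": r"(?i)install",
    "license": r"(?i)license",
}

def readme_gaps(text):
    """Return the checklist elements with no matching pattern in the README."""
    return [name for name, pat in CHECKS.items() if not re.search(pat, text)]

sample = "# Tool\n\n## Quick Start\n\npip install tool\n\nMIT License\n"
print(readme_gaps(sample))  # -> ['badges', 'demo media']
```

A pattern hit only means the element exists; the 1-5 score still comes from reading it.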
Test the actual user journey. Actually run the commands; don't just describe them.
- Does `--help` work and make sense? Run it.
- Does `--version` work? Run it.

| Dimension | Score (1-5) | Issues Found |
|---|---|---|
| Installation ease | | |
| Time to first value | | |
| Error handling quality | | |
| Help text quality | | |
| Output quality | | |
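The `--help`/`--version` checks can be scripted. This sketch uses the Python interpreter as a stand-in binary; substitute the project's actual CLI:

```python
import subprocess
import sys

def smoke(cmd, timeout=10):
    """Pass only if the command exits 0 and produces non-empty stdout."""
    try:
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return r.returncode == 0 and bool(r.stdout.strip())

print(smoke([sys.executable, "--version"]))       # -> True
print(smoke([sys.executable, "--no-such-flag"]))  # -> False
```

Catching `FileNotFoundError` means "binary not on PATH" reads as a failed check rather than a crash in the checker.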
Read 3-5 core source files and look for actual bugs.
Report findings with specific file:line references. This is NOT a full audit — it's a spot-check to catch the most embarrassing issues before a public audience sees the code.
Before public launch, verify:
| Check | Status | Notes |
|---|---|---|
| No hardcoded secrets in repo | PASS/FAIL | |
| Subprocess execution is safe | PASS/FAIL | Check for command injection vectors |
| Dependencies are minimal/audited | PASS/FAIL | |
| License is clear | PASS/FAIL | |
| Author attribution is correct | PASS/FAIL | |
| No accidental PII in outputs | PASS/FAIL | |
| .gitignore covers sensitive files | PASS/FAIL | |
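The hardcoded-secrets check can be bootstrapped with a crude pattern sweep (the patterns here are illustrative; a dedicated scanner such as gitleaks or trufflehog is more thorough and worth running before launch):

```python
import re
import tempfile
from pathlib import Path

# Crude patterns for the most embarrassing leaks; tune per project.
SECRET_RE = re.compile(
    r"(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    r"|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY",
    re.IGNORECASE)

def scan_secrets(root):
    """Return 'path:line' hits for suspicious lines, skipping .git."""
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        hits += [f"{path}:{i}" for i, line in enumerate(lines, 1)
                 if SECRET_RE.search(line)]
    return hits

# Demo: plant a fake key in a temp dir and confirm the sweep flags it.
with tempfile.TemporaryDirectory() as demo:
    Path(demo, "config.py").write_text('api_key = "abcdefgh1234"\nport = 8080\n')
    print(scan_secrets(demo))  # one hit: .../config.py:1
```

Reporting `file:line` keeps the output pasteable straight into the evidence column.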
Quick validation that the project's positioning holds up:
| Category | Score (1-5) | Weight | Weighted |
|---|---|---|---|
| Output quality | | 25% | |
| README/docs quality | | 20% | |
| Developer experience | | 20% | |
| Security/trust | | 15% | |
| Competitive positioning | | 10% | |
| Test coverage | | 10% | |
| **TOTAL** | | | **[/5]** |
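The scorecard rolls up the same way as the per-output rubric. A sketch of the go/no-go roll-up; the GREEN/YELLOW/RED thresholds are assumptions to tune, and any open blocker forces RED regardless of score:

```python
# Weights from the launch readiness scorecard above.
LAUNCH_WEIGHTS = {"output_quality": 0.25, "readme_docs": 0.20,
                  "developer_experience": 0.20, "security_trust": 0.15,
                  "competitive_positioning": 0.10, "test_coverage": 0.10}

def readiness(scores, blockers=0):
    """Weighted total plus a go/no-go color; blockers override the score."""
    total = round(sum(scores[c] * w for c, w in LAUNCH_WEIGHTS.items()), 2)
    if blockers > 0 or total < 3.0:
        return total, "RED"
    return total, "GREEN" if total >= 4.0 else "YELLOW"

print(readiness({"output_quality": 4, "readme_docs": 4,
                 "developer_experience": 3, "security_trust": 5,
                 "competitive_positioning": 3, "test_coverage": 2}))
# -> (3.65, 'YELLOW')
```

Making blockers a hard override keeps a high average from masking a single launch-killing issue.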
Numbered list of specific, prioritized improvements. Flag any that are BLOCKERS (must fix) vs NICE-TO-HAVE (fix if time).
Save the full evaluation to `.ideas/evaluation-results.md`, and write `.ideas/launch-blockers.md` with just the blocking items.

# [Project] -- Strategic Review (Pre-Launch Evaluation)
**Date:** [today]
**Overall Score:** [X/5] — [GREEN/YELLOW/RED]
**Recommendation:** [Launch / Fix then launch / Not ready]
---
## 1. Output Quality Evaluation
[per-output scoring tables]
## 2. Cross-Output Consistency
[variance analysis]
## 3. README Conversion Audit
[element checklist + score card]
## 4. Developer Experience Audit
[installation + first-use flow]
## 5. Security & Trust Audit
[checklist]
## 6. Competitive Positioning
[search results + comparison]
## 7. Launch Readiness Scorecard
[go/no-go table + decision]
## 8. Action Items
### Blockers (MUST fix)
[numbered list]
### Improvements (SHOULD fix)
[numbered list]
### Nice-to-Have (CAN fix)
[numbered list]
The output document MUST: