Apply automated 3-framework scoring to all content variations
Automates 3-framework scoring (Gap Selling, Biases, Decision) for all content variations, calculating total scores out of 30 and marking pass/fail against quality thresholds. Use to filter weak content before publication.
/plugin marketplace add rpiplewar/shipfaster
/plugin install content-gen@rapid-shipping

Apply automated framework-based scoring (Gap Selling + Munger Biases + Decision Framework) to all content variations in content-drafts.md, calculating total scores with detailed breakdowns.
Follow the Scorer agent instructions (agents/scorer.md) to score the drafts file:

Location: /home/rpiplewar/fast_dot_ai/poasting/content-drafts.md
Identify:
- All [To be filled by Scorer agent] placeholders

For each content piece, calculate:
Total Score = Gap (0-10) + Biases (0-10+) + Decision (0-10)
Quality Thresholds:
- EXCELLENT: 28-30
- GOOD: 25-27
- PASS: 20-24
- FAIL: below 20
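A minimal sketch of the calculation and threshold check, assuming the three framework scores are already available as integers (the function name and the PASS cutoff of 20 are illustrative assumptions, not defined by the Scorer agent):

```python
# Minimal sketch: combine the three framework scores and band the total.
# The PASS cutoff (>= 20) mirrors the thresholds above; names are illustrative.
def score_piece(gap: int, biases: int, decision: int) -> dict:
    total = gap + biases + decision  # Gap (0-10) + Biases (0-10+) + Decision (0-10)
    if total >= 28:
        band = "EXCELLENT"
    elif total >= 25:
        band = "GOOD"
    elif total >= 20:
        band = "PASS"
    else:
        band = "FAIL"
    return {"total": total, "band": band, "passed": total >= 20}

print(score_piece(gap=9, biases=10, decision=9))
# {'total': 28, 'band': 'EXCELLENT', 'passed': True}
```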
Replace [To be filled by Scorer agent] with:
**Scores:**
- Gap Selling: X/10 (Problem: X/3, Impact: X/3, Solution: X/4)
- Biases Activated: Y (List: Bias1, Bias2, Bias3...)
- Decision Framework: Z/10 (Hook: X/3, Value: X/4, CTA: X/3)
- **TOTAL: XX/30** {✅ PASS or ❌ FAIL}
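A hypothetical helper that fills this template from raw sub-scores; the parameter names and the assumption that the Biases score equals the count of activated biases are illustrative, not part of the Scorer agent spec:

```python
# Hypothetical renderer for the score block above. Assumes the Biases
# score equals the count of activated biases; all names are illustrative.
def render_score_block(problem, impact, solution, biases, hook, value, cta):
    gap = problem + impact + solution    # Gap Selling, out of 10
    decision = hook + value + cta        # Decision Framework, out of 10
    total = gap + len(biases) + decision
    verdict = "✅ PASS" if total >= 20 else "❌ FAIL"
    return "\n".join([
        "**Scores:**",
        f"- Gap Selling: {gap}/10 (Problem: {problem}/3, Impact: {impact}/3, Solution: {solution}/4)",
        f"- Biases Activated: {len(biases)} (List: {', '.join(biases)})",
        f"- Decision Framework: {decision}/10 (Hook: {hook}/3, Value: {value}/4, CTA: {cta}/3)",
        f"- **TOTAL: {total}/30** {verdict}",
    ])

print(render_score_block(3, 3, 4, ["Social Proof", "Scarcity"], 3, 4, 3))
```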
Before marking scoring complete, verify that every [To be filled by Scorer agent] placeholder has been replaced, then output a summary report:
✅ Scoring Complete
Variations Scored: 25 (5 themes × 5 variations)
Score Distribution:
- EXCELLENT (28-30): 3 pieces
- GOOD (25-27): 8 pieces
- PASS (20-24): 10 pieces
- FAIL (< 20): 4 pieces
Pass Rate: 84% (21/25)
Highest Scoring:
1. Theme: First Money From Code, Variation 1 (Bold Statement) - 28/30
2. Theme: The Quit Day, Variation 5 (Lollapalooza) - 27/30
3. Theme: Personal Pain → Product, Variation 2 (Story Hook) - 26/30
Output File: content-drafts.md (updated with all scores)
Next Step: Run /content-critic-review for quality feedback
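As a sanity check on the summary arithmetic, the pass rate follows directly from the distribution counts (values copied from the example report above):

```python
# Recompute the example pass rate from the score distribution.
distribution = {
    "EXCELLENT (28-30)": 3,
    "GOOD (25-27)": 8,
    "PASS (20-24)": 10,
    "FAIL (< 20)": 4,
}
total = sum(distribution.values())            # 25 variations
passed = total - distribution["FAIL (< 20)"]  # 21 scored 20 or above
print(f"Pass Rate: {passed / total:.0%} ({passed}/{total})")  # Pass Rate: 84% (21/25)
```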
If scoring logic is unclear:
- Refer to agents/scorer.md

If scores seem inaccurate:
- Goal: within ±2 points of manual expert evaluation (about 7% of the 30-point scale)
If accuracy drifts:
- Recalibrate against the rubric in agents/scorer.md
After successful scoring:
- /content-critic-review to get improvement suggestions
- /content-full-pipeline