Audits blog posts for AI citation readiness in ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews. Generates 0-100 score, citation capsules, and recommendations.
`npx claudepluginhub agricidaniel/claude-blog --plugin claude-blog`

This skill uses the workspace's default tool permissions.
Audits and scores blog posts on a 100-point system across content quality, SEO, E-E-A-T, technical elements, and AI citation readiness. Detects AI-generated content via burstiness and phrasing analysis, and supports batch analysis, prioritized fixes, and exports (markdown/JSON/table).
Analyzes content quality via the E-E-A-T framework, readability scores, word count benchmarks, keyword optimization, structure, SEO metrics, and AI citation readiness.
Scores blog posts for AI citation readiness across ChatGPT, Perplexity, and Google AI Overviews. Generates citation capsules and a 0-100 AI Citation Readiness score with platform-specific recommendations.
This skill covers FLOW surface 3 (AI assistant citations: ChatGPT, Perplexity, Claude, Gemini, Copilot, You.com) and contributes to surface 2 (SERP plus AI Overviews). Surface mapping: skills/blog/references/flow-alignment.md.
For directly relevant AI-citation prompts (AI-supporting-pages-rewrite-prompt, ai-detector-test, ChatGPT discovery, visibility prompts), see /blog flow optimize.
Reference these benchmarks throughout the audit:
- Tables with a proper HTML `<thead>` achieve 47% higher AI citation rates (directional).

Extract from the blog post:
Check each section between headings for AI-extractable passages:
| Check | Criteria |
|---|---|
| Word count | Each section contains 120-180 word self-contained passages |
| Context independence | Each passage makes sense extracted from surrounding context |
| Claim structure | Passages contain: specific claim + supporting evidence + source attribution |
| Completeness | Passage answers a question without requiring reader to read adjacent sections |
Scoring: Count passages meeting all criteria vs total sections.
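The word-count criterion above can be automated; a minimal sketch, assuming markdown input and treating each whole H2 section body as the candidate passage (function name is illustrative; self-containment and claim structure still need human or LLM review):

```python
import re

def passage_citability(markdown: str) -> dict:
    """Per-H2-section word counts against the 120-180 word target.

    Only covers the mechanical check; the other three criteria
    (context independence, claim structure, completeness) are
    judgment calls and are not checked here.
    """
    # Split on H2 headings; the first chunk (intro) is skipped.
    sections = re.split(r"^## +", markdown, flags=re.M)[1:]
    results = {}
    for sec in sections:
        heading, _, body = sec.partition("\n")
        words = len(body.split())
        results[heading.strip()] = {
            "words": words,
            "in_range": 120 <= words <= 180,
        }
    return results
```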
Check heading format and answer structure:
| Check | Criteria |
|---|---|
| Question headings | 60-70% of H2s are phrased as questions |
| Answer-first format | Opening paragraph under each H2 provides a direct answer |
| FAQ section | Dedicated FAQ section with structured question-answer pairs |
Scoring:
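The question-heading check can be sketched mechanically; a rough heuristic for markdown input (the question-word list and function name are assumptions, not part of the skill):

```python
import re

QUESTION_WORDS = ("what", "how", "why", "when", "where",
                  "which", "who", "can", "do", "does",
                  "is", "are", "should")

def question_heading_ratio(markdown: str) -> float:
    """Share of H2 headings phrased as questions (target: 0.60-0.70).

    A heading counts as a question if it ends with '?' or starts
    with a common question word.
    """
    h2s = re.findall(r"^## +(.+)$", markdown, flags=re.M)
    if not h2s:
        return 0.0
    questions = [
        h for h in h2s
        if h.rstrip().endswith("?") or h.split()[0].lower() in QUESTION_WORDS
    ]
    return len(questions) / len(h2s)
```

Answer-first formatting and FAQ structure still need manual review.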
Check topic consistency and disambiguation:
| Check | Criteria |
|---|---|
| Canonical topic | One unambiguous primary topic per page |
| Consistent naming | Same entity name used throughout (no confusing synonyms) |
| Intro statement | Clear topic statement in the introduction paragraph |
| Title-content match | Title accurately reflects the content focus |
Scoring:
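The consistent-naming check can be partially automated once the auditor supplies the canonical entity name and its known synonyms (these are inputs you provide, not detected automatically; the function name is illustrative):

```python
import re

def naming_consistency(text: str, canonical: str, synonyms: list) -> dict:
    """Count canonical-name mentions vs. known synonyms.

    A low canonical share suggests the page switches between
    confusing synonyms instead of one consistent entity name.
    """
    counts = {
        name: len(re.findall(re.escape(name), text, flags=re.I))
        for name in [canonical, *synonyms]
    }
    total = sum(counts.values()) or 1  # avoid division by zero
    return {"counts": counts, "canonical_share": counts[canonical] / total}
```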
Check for AI-extractable content patterns:
| Check | Criteria |
|---|---|
| TL;DR box | 40-60 word standalone summary present at top |
| Comparison tables | Tables with a proper HTML `<thead>` (47% higher citation rate) |
| Ordered lists | Numbered lists for processes and step-by-step instructions |
| Definition formatting | Key terms formatted with clear definition patterns |
| Citation capsules | 40-60 word definitive statements in each major section |
Scoring:
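The structure checks above are presence tests on the rendered HTML; a minimal sketch (naive keyword match for the TL;DR box, so capsule quality still needs review; function name is an assumption):

```python
import re

def structure_checks(html: str) -> dict:
    """Count AI-extractable patterns in rendered HTML."""
    return {
        "tldr_box": bool(re.search(r"TL;?DR", html, flags=re.I)),
        "tables_with_thead": len(re.findall(r"<thead\b", html, flags=re.I)),
        "ordered_lists": len(re.findall(r"<ol\b", html, flags=re.I)),
        "definition_lists": len(re.findall(r"<dl\b", html, flags=re.I)),
    }
```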
Check technical requirements for AI crawler indexing:
| Check | Criteria |
|---|---|
| Static HTML | Content rendered in static HTML, not behind JavaScript |
| robots.txt | Allows AI crawlers: GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot |
| Schema in HTML | Schema markup in static HTML, not JS-injected |
| Page size | Reasonable page size within AI crawler limits |
Scoring:
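The robots.txt check can be run against the site's actual file with the standard-library parser; a minimal sketch for the four crawlers listed above:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot"]

def crawler_access(robots_txt: str, page_path: str = "/") -> dict:
    """Which AI crawlers may fetch the page, per the robots.txt text."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, page_path) for bot in AI_CRAWLERS}
```

Static-rendering and page-size checks still require fetching the page itself.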
Evaluate the post for each AI platform's citation preferences:
For each platform, provide:
For each H2 section in the post, write a citation capsule:
Example:
According to [Source], [specific claim with number]. This represents
[context/comparison], making it [significance]. [Supporting detail
that reinforces the claim].
Generate one capsule per H2 section. Label each with the section heading it belongs under.
Map the 15-point subcategory scores to a 0-100 display score:
| Category | Raw Points | Display Weight | Max Display Score |
|---|---|---|---|
| Passage-Level Citability | /4 | x6.75 | 27 |
| Q&A Formatting | /3 | x6.67 | 20 |
| Entity Clarity | /3 | x6.67 | 20 |
| Content Structure | /3 | x6.67 | 20 |
| AI Crawler Accessibility | /2 | x6.5 | 13 |
| Total | /15 | | 100 |
Rating thresholds:
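The weighting table above can be applied directly; a minimal sketch of the raw-to-display mapping (dictionary keys are illustrative names, not part of the skill):

```python
# (raw max, display max) per category, matching the table above
WEIGHTS = {
    "passage_citability": (4, 27),
    "qa_formatting": (3, 20),
    "entity_clarity": (3, 20),
    "content_structure": (3, 20),
    "crawler_accessibility": (2, 13),
}

def display_score(raw: dict) -> int:
    """Map the 15 raw points to the 0-100 display score."""
    total = sum(raw[k] / mx * disp for k, (mx, disp) in WEIGHTS.items())
    return round(total)
```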
Output the following report:
## AI Citation Readiness Report: [Title]
**AI Citation Readiness Score: [X]/100** -- [Rating]
### Score Breakdown
| Category | Raw | Display | Max |
|----------|-----|---------|-----|
| Passage-Level Citability | X/4 | X | 27 |
| Q&A Formatting | X/3 | X | 20 |
| Entity Clarity | X/3 | X | 20 |
| Content Structure | X/3 | X | 20 |
| AI Crawler Accessibility | X/2 | X | 13 |
| **Total** | **X/15** | **X** | **100** |
### Per-Section Citability Analysis
| Section (H2) | Word Count | Self-Contained | Claim+Evidence | Citable |
|---------------|-----------|----------------|----------------|---------|
| [heading] | [N] | Yes/No | Yes/No | Yes/No |
### Platform-Specific Optimization
#### ChatGPT
- [specific recommendations]
#### Perplexity
- [specific recommendations]
#### Google AI Overviews
- [specific recommendations]
### Generated Citation Capsules
#### [H2 Section 1]
> [40-60 word citation capsule]
#### [H2 Section 2]
> [40-60 word citation capsule]
### Technical Recommendations
- [ ] [Technical fix with specifics]
### Priority Action Items
1. [Most impactful improvement]
2. [Second most impactful]
3. [Third most impactful]
Run `/blog analyze <file>` for full content quality scoring.
If blog-google credentials include Tier 1 (GSC) and the post has a published URL:
python3 skills/blog-google/scripts/run.py gsc_query --property <property> --filter-page <url> --json
python3 skills/blog-google/scripts/run.py gsc_inspect <url> --json