Audits domain authority with 40-item CITE scoring across citation, identity, trust, and eminence, plus veto checks, reports, and action plans. For SEO credibility and trust assessments.
From seo-geo-claude-skills (`npx claudepluginhub aaron-he-zhu/seo-geo-claude-skills`). This skill uses the workspace's default tool permissions.
Based on CITE Domain Rating. Full benchmark reference: references/cite-domain-rating.md
System Mode: This cross-cutting skill is part of the protocol layer and follows the shared Skill Contract and State Model.
This skill evaluates domain authority across 40 standardized criteria organized into four dimensions. It produces a comprehensive audit report with per-item scoring, dimension and weighted scores by domain type, veto item checks, and a prioritized action plan.
Sister skill: content-quality-auditor evaluates content at the page level (80 items). This skill evaluates the domain behind the content (40 items). Together they provide a complete 120-item assessment.
Namespace note: CITE uses C01-C10 for Citation items; CORE-EEAT uses C01-C10 for Contextual Clarity items. In combined 120-item assessments, prefix with the framework name (e.g., CITE-C01 vs CORE-C01) to avoid confusion.
System role: Citation Trust Gate. It decides whether a domain is credible enough to support ranking, citation, and brand authority work.
Use this when domain credibility or citation trustworthiness is in question, even if the user doesn't use audit terminology.
Start with one of these prompts. Finish with a citation-trust verdict and a handoff summary using the repository format in Skill Contract.
- Audit domain authority for [domain]
- Run a CITE domain audit on [domain] as a [domain type]
- CITE audit for example.com as an e-commerce site
- Score this SaaS domain against the 40-item benchmark: [domain]
- Compare domain authority: [your domain] vs [competitor 1] vs [competitor 2]
- Run full 120-item assessment on [domain]: CITE domain audit + CORE-EEAT content audit on [sample pages]
Gate verdict: TRUSTED (no veto items, scores above threshold) / CAUTIOUS (issues found but no veto) / UNTRUSTED (veto item T03, T05, or T09 failed). Always state the verdict prominently at the top of the report.
Expected output: a CITE audit report, a citation-trust verdict, and a short handoff summary ready for memory/audits/domain/.
Memory: memory/audits/domain/ · Hot cache: memory/hot-cache.md (auto-saved). Authority context is written to memory/audits/domain/, and results feed into entity-optimizer as authority input for the brand's canonical profile. Consult Next Best Skill below once the trust picture is clear. See CONNECTORS.md for tool category placeholders.
Note: All integrations are optional. This skill works without any API keys — users provide data manually when no tools are connected.
With ~~link database + ~~SEO tool + ~~AI monitor + ~~knowledge graph + ~~brand monitor connected: Automatically pull backlink profiles and link quality metrics from ~~link database, domain authority scores and keyword rankings from ~~SEO tool, AI citation data from ~~AI monitor, entity presence from ~~knowledge graph, and brand mention data from ~~brand monitor.
With manual data only: Ask the user to provide:
- Backlink profile and link quality metrics
- Domain authority scores and keyword rankings
- AI citation data, if available
- Knowledge graph / entity presence details
- Brand mention data
Proceed with the full 40-item audit using provided data. Note in the output which items could not be fully evaluated due to missing access (e.g., AI citation data, knowledge graph queries, WHOIS history).
When a user requests a domain authority audit:
### Audit Setup
**Domain**: [domain]
**Domain Type**: [auto-detected or user-specified]
**Dimension Weights**: [from domain-type weight table below]
#### Domain-Type Weight Table
> Canonical source: `references/cite-domain-rating.md`. This inline copy is for convenience.
| Dim | Default | Content Publisher | Product & Service | E-commerce | Community & UGC | Tool & Utility | Authority & Institutional |
|-----|:-------:|:-:|:-:|:-:|:-:|:-:|:-:|
| C | 35% | **40%** | 25% | 20% | 35% | 25% | **45%** |
| I | 20% | 15% | **30%** | 20% | 10% | **30%** | 20% |
| T | 25% | 20% | 25% | **35%** | 25% | 25% | 20% |
| E | 20% | 25% | 20% | 25% | **30%** | 20% | 15% |
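For scripted audits, the weight table maps directly to a lookup. A minimal Python sketch; the dictionary keys are illustrative labels, and only the percentages come from the table above:

```python
# Dimension weights by domain type, copied from the table above.
# Keys are illustrative; percentages are the canonical values.
CITE_WEIGHTS = {
    "default":                 {"C": 35, "I": 20, "T": 25, "E": 20},
    "content_publisher":       {"C": 40, "I": 15, "T": 20, "E": 25},
    "product_service":         {"C": 25, "I": 30, "T": 25, "E": 20},
    "ecommerce":               {"C": 20, "I": 20, "T": 35, "E": 25},
    "community_ugc":           {"C": 35, "I": 10, "T": 25, "E": 30},
    "tool_utility":            {"C": 25, "I": 30, "T": 25, "E": 20},
    "authority_institutional": {"C": 45, "I": 20, "T": 20, "E": 15},
}

# Sanity check: every profile's weights sum to 100%.
assert all(sum(w.values()) == 100 for w in CITE_WEIGHTS.values())
```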
#### Veto Check (Emergency Brake)
| Veto Item | Status | Action |
|-----------|--------|--------|
| T03: Link-Traffic Coherence | ✅ Pass / ⚠️ VETO | [If VETO: "Audit backlink profile; disavow toxic links"] |
| T05: Backlink Profile Uniqueness | ✅ Pass / ⚠️ VETO | [If VETO: "Flag as manipulation network; investigate link sources"] |
| T09: Penalty & Deindex History | ✅ Pass / ⚠️ VETO | [If VETO: "Address penalty first; all other optimization is futile"] |
If any veto item triggers, flag it prominently at the top of the report; the CITE Score is capped at 39 (Poor) regardless of other scores.
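In scripted form, the emergency brake reduces to a cap applied after the weighted score is computed. A sketch, assuming veto failures are collected as a list of item IDs:

```python
def apply_veto_cap(cite_score: float, veto_failures: list[str]) -> float:
    """Cap the CITE Score at 39 (Poor) if any veto item (T03, T05, T09) failed."""
    return min(cite_score, 39.0) if veto_failures else cite_score
```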
Evaluate each item against the criteria in references/cite-domain-rating.md.
Score each item as Pass, Partial, or Fail, with a specific supporting observation:
### C — Citation
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Referring Domains Volume | Pass/Partial/Fail | [specific observation] |
| C02 | Referring Domains Quality | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
| C10 | Link Source Diversity | Pass/Partial/Fail | [specific observation] |
**C Score**: [X]/100
### I — Identity
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| I01 | Knowledge Graph Presence | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**I Score**: [X]/100
The Trust and Eminence dimensions follow the same format:
### T — Trust
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| T01 | Link Profile Naturalness | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**T Score**: [X]/100
### E — Eminence
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| E01 | Organic Search Visibility | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**E Score**: [X]/100
Note: Some items require specialized data (C05-C08 AI citation data, I01 knowledge graph queries, T04-T05 IP/profile analysis). Score what is observable; mark unverifiable items as "N/A — requires [data source]" and exclude from dimension average.
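When computing dimension scores programmatically, N/A handling matters: excluded items must not drag the average down. A sketch, assuming Pass/Partial/Fail map to 100/50/0 points; the canonical mapping lives in references/cite-domain-rating.md:

```python
# Assumed point values; references/cite-domain-rating.md is canonical.
POINTS = {"Pass": 100, "Partial": 50, "Fail": 0}

def dimension_score(items: dict[str, str]) -> float | None:
    """Average the scorable items; N/A items are excluded, not counted as Fail."""
    scorable = [POINTS[v] for v in items.values() if v in POINTS]
    return round(sum(scorable) / len(scorable), 1) if scorable else None

# Example: C05-C08 marked N/A because no AI citation data is connected.
c_items = {"C01": "Pass", "C02": "Partial", "C03": "Fail", "C05": "N/A", "C06": "N/A"}
print(dimension_score(c_items))  # 50.0, averaged over the 3 scorable items only
```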
Calculate scores and generate the final report:
## CITE Domain Authority Report
### Overview
- **Domain**: [domain]
- **Domain Type**: [type]
- **Audit Date**: [date]
- **CITE Score**: [score]/100 ([rating])
- **Veto Status**: ✅ No triggers / ⚠️ [item] triggered — Score capped at 39
### Dimension Scores
| Dimension | Score | Rating | Weight | Weighted |
|-----------|-------|--------|--------|----------|
| C — Citation | [X]/100 | [rating] | [X]% | [X] |
| I — Identity | [X]/100 | [rating] | [X]% | [X] |
| T — Trust | [X]/100 | [rating] | [X]% | [X] |
| E — Eminence | [X]/100 | [rating] | [X]% | [X] |
| **CITE Score** | | | | **[X]/100** |
**Score Calculation**: CITE Score = C × [w_C] + I × [w_I] + T × [w_T] + E × [w_E]
**Rating Scale**: 90-100 Excellent | 75-89 Good | 60-74 Medium | 40-59 Low | 0-39 Poor
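Putting the formula, the veto cap, and the rating scale together, a sketch that reuses `CITE_WEIGHTS` and `apply_veto_cap` from the earlier snippets (the example dimension scores are illustrative):

```python
def cite_score(dims: dict[str, float], weights: dict[str, int]) -> float:
    """CITE Score = C*w_C + I*w_I + T*w_T + E*w_E, weights given in percent."""
    return round(sum(dims[d] * weights[d] / 100 for d in "CITE"), 1)

def rating(score: float) -> str:
    """Map a 0-100 score onto the rating scale above."""
    for floor, label in [(90, "Excellent"), (75, "Good"), (60, "Medium"), (40, "Low")]:
        if score >= floor:
            return label
    return "Poor"

dims = {"C": 70, "I": 50, "T": 80, "E": 60}  # example dimension scores
score = apply_veto_cap(cite_score(dims, CITE_WEIGHTS["ecommerce"]), [])
print(score, rating(score))  # 67.0 Medium
```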
### Per-Item Scores
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Referring Domains Volume | [Pass/Partial/Fail] | [observation] |
| C02 | Referring Domains Quality | [Pass/Partial/Fail] | [observation] |
| ... | ... | ... | ... |
| E10 | Industry Share of Voice | [Pass/Partial/Fail] | [observation] |
### Top 5 Priority Improvements
Sorted by weight × points lost, highest impact first (a ranking sketch follows this list)
1. **[ID] [Name]** — [specific modification suggestion]
- Current: [Fail/Partial] | Potential gain: [X] weighted points
- Action: [concrete step]
2. **[ID] [Name]** — [specific modification suggestion]
- Current: [Fail/Partial] | Potential gain: [X] weighted points
- Action: [concrete step]
3–5. [Same format]
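The ranking step is mechanical. A simplified sketch reusing `POINTS` from the dimension-score snippet above; the item IDs and weights here are illustrative, and weight × points lost is a proxy for impact rather than an exact weighted-point delta:

```python
def top_priorities(items: list[dict], n: int = 5) -> list[dict]:
    """Rank Fail/Partial items by dimension weight x points lost (impact proxy)."""
    for it in items:
        it["gain"] = (100 - POINTS[it["score"]]) * it["weight"] / 100
    return sorted(items, key=lambda it: it["gain"], reverse=True)[:n]

# Illustrative items, using e-commerce dimension weights (C 20, T 35, E 25):
items = [
    {"id": "C02", "score": "Fail", "weight": 20},
    {"id": "T07", "score": "Partial", "weight": 35},
    {"id": "E03", "score": "Partial", "weight": 25},
]
for it in top_priorities(items):
    print(it["id"], it["gain"])  # C02 20.0, T07 17.5, E03 12.5
```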
### Action Plan
#### Quick Wins (< 1 week)
- [ ] [Action 1]
- [ ] [Action 2]
#### Medium Effort (1-4 weeks)
- [ ] [Action 3]
- [ ] [Action 4]
#### Strategic (1-3 months)
- [ ] [Action 5]
- [ ] [Action 6]
### Cross-Reference with CORE-EEAT
For a complete assessment, pair this CITE audit with a CORE-EEAT content audit:
| Assessment | Score | Rating |
|-----------|-------|--------|
| CITE (Domain) | [X]/100 | [rating] |
| CORE-EEAT (Content) | [Run content-quality-auditor on sample pages] | — |
**Diagnosis Matrix**:
- High CITE + High CORE-EEAT → Maintain and expand
- High CITE + Low CORE-EEAT → Prioritize content quality
- Low CITE + High CORE-EEAT → Build domain authority
- Low CITE + Low CORE-EEAT → Start with content, then domain
### Recommended Next Steps
- For domain authority building: focus on top 5 priorities above
- For content improvement: use `content-quality-auditor` on key pages
- For backlink strategy: use `backlink-analyzer` for detailed link analysis
- For competitor benchmarking: use `competitor-analysis` with CITE scores
- For tracking progress: run `/seo:report` with CITE score trends
After delivering findings to the user, ask:
"Save these results for future sessions?"
If yes, write a dated summary to the appropriate memory/ path using filename YYYY-MM-DD-<topic>.md containing: the citation-trust verdict, the CITE Score and dimension scores, veto status, and the top priority actions.
If any veto-level issue was found (CORE-EEAT T04, C01, R10 or CITE T03, T05, T09), also append a one-liner to memory/hot-cache.md without asking.
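A minimal sketch of the save step, assuming the memory paths above; the function name and signature are illustrative:

```python
from datetime import date
from pathlib import Path

def save_audit(topic: str, summary: str, veto_line: str | None = None) -> None:
    """Write the dated summary; on a veto-level issue, append to the hot cache."""
    out = Path("memory/audits/domain") / f"{date.today():%Y-%m-%d}-{topic}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(summary)
    if veto_line:  # veto-level issues are appended without asking
        with open("memory/hot-cache.md", "a") as f:
            f.write(veto_line + "\n")
```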
See references/example-report.md for a complete CITE audit of cloudhosting.com showing veto check, dimension scores, top 5 improvements, action plan, and cross-reference with CORE-EEAT.