# docs-check (from cc-arsenal)

Validate documentation freshness, completeness, and quality against the current codebase state. Use this skill to check documentation health, find stale docs, detect hallucinations in documentation, or audit documentation quality.

Install: `npx claudepluginhub mgiovani/cc-arsenal --plugin cc-arsenal-teams`
> **Cross-Platform AI Agent Skill**: works with any AI agent platform that supports the skills.sh standard.
**CRITICAL**: this skill should actively detect hallucinations in existing docs, that is, documented claims that do not match the codebase.
### Report Format

**Documentation Health Report**

Overall Score: 85/100

**Good (3 docs)**:
- docs/architecture.md (Score: 95/100)
- docs/onboarding.md (Score: 92/100)
- docs/adr/ (5 records, Score: 88/100)

**Needs Attention (2 docs)**:
- docs/data-model.md (Score: 65/100)
- docs/deployment.md (Score: 58/100)

**Missing (2 docs)**:
- docs/security.md
- docs/api-documentation.md

**Quality Issues**:
- docs/data-model.md:42 - Invalid Mermaid syntax
- docs/architecture.md:15 - Broken link to [non-existent.md]
### Recommendations Format
**Priority Recommendations**:
- **HIGH**: Update data model documentation. Command: `docs-diagram er`. Reason: schema changed 5 days ago, ER diagram missing.
- **MEDIUM**: Fix deployment documentation placeholders. Command: `docs-update deployment`. Reason: contains unreplaced placeholders.
- **LOW**: Add security documentation. Command: `docs-diagram security`. Reason: good practice for complete documentation.
## Usage

Check all documentation: `docs-check`

With a specific focus: `docs-check core`, `docs-check data`, or `docs-check focus on database documentation`

Quick check: `docs-check quick`
## Important Notes
- **Non-destructive**: Only reads, never modifies documentation
- **Git-aware**: Uses git history to assess freshness
- **Context-aware**: Understands project type and relevant docs
- **Actionable**: Provides specific commands to fix issues
- **Incremental**: Can be run frequently
## When to Run
- Before onboarding new team members
- During documentation reviews
- After major refactoring
- As part of pre-release checklist
- When documentation feels stale
- Regularly (weekly or bi-weekly)
## Additional Resources
- For detailed verification commands and bash patterns, see [references/verification-patterns.md](references/verification-patterns.md)
- For complete scoring rubrics and thresholds, see [references/scoring-criteria.md](references/scoring-criteria.md)
## Claude Code Enhanced Features
This skill includes Claude Code-specific enhancements, chiefly parallel subagent dispatch via the Task tool, as described in the workflow below.
## Workflow
### Phase 1: Parallel Documentation Analysis (Use SubAgents)
#### For Multiple Documents
Spawn parallel subagents for each documentation file, using the Task tool with multiple parallel agents:

- **Agent 1** - Core Docs Verification
- **Agent 2** - Data Docs Verification
- **Agent 3** - Explore Codebase Reality
#### For Single Document (Section-Level Verification)
Even when checking ONE document, spawn subagents for each logical section. See [references/verification-patterns.md](references/verification-patterns.md) for detailed section-level verification patterns.
**Verification categories to parallelize**:
- Component/service names - Do they exist?
- Numeric counts - Are they accurate?
- Diagram entities - Are they real?
- File/path references - Do files exist?
- Technology claims - Are they in package files?
- Relationship claims - Do the connections exist in code?
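As a rough illustration, two of these checks (file/path references and technology claims) can be sketched against a throwaway tree; every path, file, and package name below is invented for the demo:

```shell
# Hypothetical mini-project to check doc claims against; nothing here is real data.
dir=$(mktemp -d); cd "$dir"
mkdir -p docs src
printf '{ "dependencies": { "express": "^4.18.0" } }\n' > package.json
printf 'The entry point is [src/app.js](src/app.js) and we use express.\n' > docs/arch.md

# File/path references: does the markdown link target exist on disk?
link=$(grep -oE '\]\([^)]+\)' docs/arch.md | sed 's/](\(.*\))/\1/')
[ -e "$link" ] && link_status="ok" || link_status="MISSING"
echo "$link: $link_status"

# Technology claims: is the claimed library present in the package file?
grep -q '"express"' package.json && tech_status="confirmed" || tech_status="hallucinated"
echo "express: $tech_status"
```

A real run would loop over every link and every claimed technology; this shows only the shape of each check.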
### Phase 2: Parse Arguments
1. Extract optional focus area from `$ARGUMENTS`
2. Focus areas: `core`, `data`, `infrastructure`, `all`
3. Default: check all documentation
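The parsing above can be sketched as follows; `$ARGUMENTS` is supplied by the skill runtime, so it is stubbed with an example value here:

```shell
# $ARGUMENTS is normally injected by the runtime; stub an example value for the sketch.
ARGUMENTS="core"

case "$ARGUMENTS" in
  core|data|infrastructure) focus="$ARGUMENTS" ;;  # recognized focus areas
  ""|all)                   focus="all" ;;         # default: check everything
  *)                        focus="all" ;;         # free text; a real run would interpret it
esac
echo "focus=$focus"
```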
### Phase 3: Scan and Analyze
1. Find all documentation files in `docs/`
2. Identify documentation types present
3. Check for ADRs and RFCs
4. Detect technology stack, database presence, deployment configs, project type
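A minimal sketch of this scan on a hypothetical project layout (all file names are illustrative):

```shell
# Build a hypothetical project tree to scan; nothing here reflects a real repo.
dir=$(mktemp -d); cd "$dir"
mkdir -p docs/adr
touch docs/architecture.md docs/adr/0001-use-postgres.md pyproject.toml

# Steps 1-3: find documentation files and count ADRs.
doc_count=$(find docs -name '*.md' | wc -l | tr -d ' ')
adr_count=$(find docs/adr -name '*.md' | wc -l | tr -d ' ')

# Step 4: crude technology-stack detection from package files.
stack="unknown"
[ -f package.json ]   && stack="node"
[ -f pyproject.toml ] && stack="python"
[ -f Cargo.toml ]     && stack="rust"
echo "docs=$doc_count adrs=$adr_count stack=$stack"
```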
### Phase 4: Perform Validation Checks
**A. Relevance Check**: Determine which docs are relevant, identify missing documentation for detected technologies.
**B. Freshness Check**: Compare doc last-modified dates with related code changes using git history. Flag docs not updated after significant code changes.
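The git comparison behind the freshness check can be sketched on a throwaway repository with pinned commit dates; paths, dates, and the docs-to-code mapping are invented for illustration:

```shell
# Throwaway repo: the doc is committed first, then the related code changes later.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  demo
mkdir -p docs src
echo "data model overview" > docs/data-model.md
git add docs
GIT_AUTHOR_DATE="2024-01-01T12:00:00 +0000" GIT_COMMITTER_DATE="2024-01-01T12:00:00 +0000" \
  git commit -qm "document data model"
echo "ALTER TABLE users" > src/schema.sql
git add src
GIT_AUTHOR_DATE="2024-01-06T12:00:00 +0000" GIT_COMMITTER_DATE="2024-01-06T12:00:00 +0000" \
  git commit -qm "change schema"

# Freshness: compare the last commit touching the doc with the code it covers.
doc_ts=$(git log -1 --format=%ct -- docs/data-model.md)
code_ts=$(git log -1 --format=%ct -- src/)
[ "$code_ts" -gt "$doc_ts" ] && echo "STALE: docs/data-model.md predates latest src/ change"
```

In practice the mapping from a doc to "its" code paths has to come from configuration or heuristics; this sketch hard-codes one pair.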
**C. Completeness Check**: Verify all required sections are present, check for unreplaced `{{PLACEHOLDER}}` values, ensure diagrams exist where expected.
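The placeholder part of this check is essentially one grep; a sketch with invented file contents:

```shell
# A doc with one unreplaced template token (content invented for the demo).
doc=$(mktemp)
printf 'Owner: {{TEAM_NAME}}\nRuntime: Node 20\n' > "$doc"

# Report line number and token for every unreplaced {{PLACEHOLDER}}.
hits=$(grep -noE '\{\{[A-Z_]+\}\}' "$doc")
echo "$hits"
```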
**D. Quality Check**: Validate Mermaid diagram syntax, check for broken internal links, verify markdown formatting, check for empty sections.
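Link and Mermaid validation need real parsers, but the empty-section part of the quality check can be sketched with awk (sample content invented): a heading whose next non-blank line is another heading, or end of file, has no body.

```shell
# Sample doc: "## Setup" has no body, "## Usage" does.
doc=$(mktemp)
printf '## Setup\n\n## Usage\nRun docs-check.\n' > "$doc"

# Flag any heading followed only by blanks before the next heading (or EOF).
empty=$(awk '
  /^#/ { if (prev != "") print "EMPTY: " prev; prev = $0; next }
  NF   { prev = "" }
  END  { if (prev != "") print "EMPTY: " prev }
' "$doc")
echo "$empty"
```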
For detailed verification commands and bash patterns, see [references/verification-patterns.md](references/verification-patterns.md).
### Phase 5: Calculate Scores
See [references/scoring-criteria.md](references/scoring-criteria.md) for complete scoring rubrics.
**Score categories per document**:
- **Freshness** (0-100): Based on recency of updates relative to code changes
- **Completeness** (0-100): Based on section coverage and placeholder replacement
- **Quality** (0-100): Based on formatting, diagram validity, link integrity
**Overall Score**: Average of all document scores, weighted by importance (core docs > others).
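As a worked example using the per-document scores from the report above, with an assumed weight of 2 for core docs (architecture, onboarding) and 1 for the rest; the actual weights and which docs count as core are defined in references/scoring-criteria.md:

```shell
# Hypothetical weighted average: core docs (95, 92) at weight 2, others at weight 1.
total=$(( 95*2 + 92*2 + 88*1 + 65*1 + 58*1 ))
weight=$(( 2 + 2 + 1 + 1 + 1 ))
overall=$(( total / weight ))
echo "overall=$overall"
```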
### Phase 6: Generate Report
Produce a comprehensive report containing:
- Status summary with overall score
- List of documents by status (Good, Needs Attention, Missing)
- **Hallucination Report** - Claims that do not match reality
- Quality issues with specific locations and line numbers
- Actionable recommendations with specific commands