# /sync-docs - Documentation Sync

Use when the user asks to "update docs", "sync documentation", "fix outdated docs", "update changelog", or "docs are stale", or after completing code changes that might affect documentation.

Install: `npx claudepluginhub agent-sh/sync-docs --plugin sync-docs`

Usage: `/sync-docs [report|apply] [--scope=recent|all|before-pr] [path]`

Syncs documentation with the actual code state: finds docs that reference changed files, updates the CHANGELOG, and flags outdated examples.

## Constraints

1. **Preserve existing content** - Update references, don't rewrite sections
2. **Minimal changes** - Only fix what's actually outdated
3. **Evidence-based** - Every change linked to a specific code change
4. **Safe defaults** - Report mode by default

## Architecture
```
/sync-docs command (you are here)
        |
        v
sync-docs-agent (sonnet)
  |-- Invoke sync-docs skill
  |-- Return structured results
        |
        v
Command processes results
  |-- If apply mode + fixes: spawn simple-fixer
  |-- Present final results
```
## Arguments

Parse from $ARGUMENTS:

- **Mode**: `report` (default) or `apply`
- **Scope**:
  - `--scope=recent` (default): Files changed since last commit to main
  - `--scope=all`: Scan all docs against all code
  - `--scope=before-pr`: Files in current branch (for /next-task integration)
- **Path**: Specific file or directory

```javascript
const args = '$ARGUMENTS'.split(' ').filter(Boolean);
const mode = args.includes('apply') ? 'apply' : 'report';
const scopeArg = args.find(a => a.startsWith('--scope='));
const scope = scopeArg ? scopeArg.split('=')[1] : 'recent';
const pathArg = args.find(a => !a.startsWith('--') && a !== 'report' && a !== 'apply');
```
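As a quick sanity check, here is the same parsing logic walked through a sample invocation (the argument string and path are illustrative only):

```javascript
// Walk the parser through a hypothetical "/sync-docs apply --scope=all src/auth/" call.
const args = 'apply --scope=all src/auth/'.split(' ').filter(Boolean);
const mode = args.includes('apply') ? 'apply' : 'report';
const scopeArg = args.find(a => a.startsWith('--scope='));
const scope = scopeArg ? scopeArg.split('=')[1] : 'recent';
const pathArg = args.find(a => !a.startsWith('--') && a !== 'report' && a !== 'apply');
console.log(mode, scope, pathArg); // apply all src/auth/
```

Note that the path is simply the first token that is neither a flag nor a mode keyword, so it can appear before or after `--scope=`.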
Before spawning the agent, optionally pre-fetch repo-intel doc signals; every step fails open if the analyzer or its state directory is unavailable:

```javascript
let repoIntelContext = '';
try {
  const fs = require('fs');
  const path = require('path');
  const { binary } = require('@agentsys/lib');
  const { getStateDirPath } = require('@agentsys/lib/platform/state-dir');
  const cwd = process.cwd();
  const mapFile = path.join(getStateDirPath(cwd), 'repo-intel.json');
  // Run one analyzer query; any failure degrades to null.
  const q = (args) => { try { return JSON.parse(binary.runAnalyzer(args)); } catch { return null; } };
  if (fs.existsSync(mapFile)) {
    // Light pre-fetch so the orchestrator can decide whether to surface an
    // "analyzer signals available" hint before spawning the agent. The actual
    // signal consumption happens inside the skill via
    // lib/collectors/analyzer-queries, which pulls the full four-query bundle
    // (stale-docs, doc-drift, entry-points, slop-fixes). We only need presence
    // signals here to avoid double-fetching the same data into the prompt.
    const staleDocs = q(['repo-intel', 'query', 'stale-docs', '--top', '30', '--map-file', mapFile, cwd]);
    const docDrift = q(['repo-intel', 'query', 'doc-drift', '--top', '20', '--map-file', mapFile, cwd]);
    if ((staleDocs && staleDocs.length > 0) || (docDrift && docDrift.length > 0)) {
      repoIntelContext = '\n\nRepo-intel doc analysis (use this data, do NOT re-scan):';
      if (staleDocs && staleDocs.length > 0) {
        repoIntelContext += '\n\nStale doc references (symbol-level - highest priority):';
        staleDocs.forEach(s => {
          repoIntelContext += `\n  ${s.doc}:${s.line} "${s.reference}" [${s.issue}] ${s.suggestion}`;
        });
      }
      if (docDrift && docDrift.length > 0) {
        const severelyStale = docDrift.filter(d => d.codeCoupling === 0).map(d => d.path);
        if (severelyStale.length > 0) {
          repoIntelContext += '\n\nDocs with ZERO code coupling (likely entirely stale): ' + severelyStale.join(', ');
        }
      }
    }
    // Note: entry-points + slop-fixes (orphan-export, passthrough-wrapper) are
    // fetched inside the skill via lib/collectors/analyzer-queries and surfaced
    // in the skill's JSON output as `undocumentedExports`, `documentsDeadCode`,
    // and `documentsWrapper`. Not pre-fetched here to avoid duplicating
    // analyzer calls across the command + skill path.
  }
} catch (e) { /* analyzer or state dir unavailable */ }
```
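The `q` wrapper above is the load-bearing piece of this fail-open design: any analyzer error or malformed output degrades to `null` rather than aborting the command. A minimal standalone sketch of that pattern, with a thunk standing in for the real `binary.runAnalyzer` call and made-up payloads:

```javascript
// Fail-open query wrapper: parsed JSON on success, null on any failure.
const q = (run) => { try { return JSON.parse(run()); } catch { return null; } };

const ok = q(() => '[{"doc":"README.md","line":12}]');      // valid analyzer output
const crashed = q(() => { throw new Error('unavailable'); }); // analyzer threw
const garbled = q(() => 'not json');                          // unparseable output
console.log(ok.length, crashed, garbled); // 1 null null
```

Because every query degrades to `null` and the presence checks treat `null` as "no signals", the command behaves identically whether repo-intel is installed or not.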
Spawn the sync-docs agent and capture its output:

```javascript
const agentOutput = await Task({
  subagent_type: "sync-docs:sync-docs-agent",
  model: "sonnet",
  prompt: `Sync documentation with code state.
Mode: ${mode}
Scope: ${scope}
${pathArg ? `Path: ${pathArg}` : ''}
Execute the sync-docs skill and return structured results.${repoIntelContext}`
});
```
Parse the structured JSON from between `=== SYNC_DOCS_RESULT ===` markers:

```javascript
// Helper to extract the JSON result from agent output. Capture everything up
// to the END_RESULT marker: a lazy `{[\s\S]*?}` capture would stop at the
// first closing brace and truncate nested JSON.
function parseSyncDocsResult(output) {
  const match = output.match(/=== SYNC_DOCS_RESULT ===\s*([\s\S]*?)\s*=== END_RESULT ===/);
  if (!match) {
    throw new Error('No SYNC_DOCS_RESULT found in agent output');
  }
  return JSON.parse(match[1]);
}

const result = parseSyncDocsResult(agentOutput);
// result now contains: { mode, scope, validation, discovery, issues, fixes, changelog, summary }
```
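For illustration, the marker-based extraction can be exercised on a tiny made-up transcript (field values are illustrative; the extractor is redefined here so the snippet runs on its own):

```javascript
// Self-contained sketch of the marker-based extraction.
function extractResult(output) {
  const match = output.match(/=== SYNC_DOCS_RESULT ===\s*([\s\S]*?)\s*=== END_RESULT ===/);
  if (!match) throw new Error('No SYNC_DOCS_RESULT found in agent output');
  return JSON.parse(match[1]);
}

const transcript = [
  'Scanning docs...',           // agent narration is ignored
  '=== SYNC_DOCS_RESULT ===',
  '{"mode": "report", "scope": "recent", "fixes": []}',
  '=== END_RESULT ===',
].join('\n');

const parsed = extractResult(transcript);
console.log(parsed.mode, parsed.fixes.length); // report 0
```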
If mode is `apply` and the `fixes` array is non-empty, spawn the fixer:

```javascript
if (mode === 'apply' && result.fixes.length > 0) {
  await Task({
    subagent_type: "next-task:simple-fixer",
    model: "haiku",
    prompt: `Apply these documentation fixes:
${JSON.stringify(result.fixes, null, 2)}
Use the Edit tool to apply each fix. Commit message: "docs: sync documentation with code changes"`
  });
}
```
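The exact shape of each entry in `result.fixes` is defined by the sync-docs skill; as an assumed illustration (field names hypothetical), the fixer prompt built from two such entries would look like this:

```javascript
// Hypothetical fix entries -- the real field set comes from the sync-docs skill.
const fixes = [
  { file: 'docs/api.md', type: 'stale-reference' },
  { file: 'CHANGELOG.md', type: 'missing-entry' },
];
const prompt = `Apply these documentation fixes:
${JSON.stringify(fixes, null, 2)}
Use the Edit tool to apply each fix. Commit message: "docs: sync documentation with code changes"`;
console.log(prompt.split('\n')[0]); // Apply these documentation fixes:
```

Pretty-printing with `JSON.stringify(fixes, null, 2)` keeps the payload readable in the subagent's prompt.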
## Output

Render the final report from `result`:

```markdown
## Documentation Sync ${mode === 'apply' ? 'Applied' : 'Report'}

### Scope

- Mode: ${mode}
- Scope: ${scope}
- Changed files analyzed: ${result.discovery.changedFilesCount}
- Related docs found: ${result.discovery.relatedDocsCount}

### Validation Results

**Count/Version Validation**
${result.validation.counts.status === 'ok' ? '[OK] Counts and versions aligned' : `[WARN] ${result.validation.counts.summary.issueCount} issues found`}

**Cross-Platform Validation**
${result.validation.crossPlatform.status === 'ok' ? '[OK] Cross-platform docs valid' : `[WARN] ${result.validation.crossPlatform.summary.issueCount} issues found`}

### Documentation Issues
${result.issues.length === 0 ? '[OK] No documentation issues detected' :
  result.issues.map(i => `- **${i.doc}:${i.line || '?'}** (${i.severity}): ${i.suggestion || i.type}`).join('\n')}

### CHANGELOG Status
${result.changelog.status === 'ok' ? '[OK] All changes documented' :
  `[WARN] ${result.changelog.undocumented.length} commits may need entries:\n${result.changelog.undocumented.map(c => ` - ${c}`).join('\n')}`}
${mode === 'apply' && result.fixes.length > 0 ? `
### Fixes Applied
${result.fixes.map(f => `- **${f.file}**: ${f.type}`).join('\n')}
` : ''}
${mode === 'report' && result.fixes.length > 0 ? `
## Do Next
- [ ] Run \`/sync-docs apply\` to fix ${result.fixes.length} auto-fixable issues
- [ ] Review flagged items manually
` : ''}
```
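The template's fallback expressions (`i.line || '?'`, `i.suggestion || i.type`) are easy to misread; a small sketch with two made-up issues shows how missing fields render:

```javascript
// Render the issues section for two hypothetical issues; the second is
// missing both its line number and suggestion.
const issues = [
  { doc: 'docs/api.md', line: 42, severity: 'high', suggestion: 'update signature', type: 'stale-ref' },
  { doc: 'README.md', line: null, severity: 'low', type: 'broken-link' },
];
const section = issues.length === 0
  ? '[OK] No documentation issues detected'
  : issues.map(i => `- **${i.doc}:${i.line || '?'}** (${i.severity}): ${i.suggestion || i.type}`).join('\n');
console.log(section);
// - **docs/api.md:42** (high): update signature
// - **README.md:?** (low): broken-link
```

A missing line number degrades to `?` and a missing suggestion falls back to the issue type, so every row still renders something actionable.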
## Examples

```shell
# Check what docs might need updates (safe, no changes)
/sync-docs

# Check docs related to specific path
/sync-docs report src/auth/

# Apply safe fixes
/sync-docs apply

# Full codebase scan
/sync-docs report --scope=all

# For PR preparation
/sync-docs apply --scope=before-pr
```
This command is also used by /next-task Phase 11:

```javascript
// Phase 11 invocation
await Task({
  subagent_type: "sync-docs:sync-docs-agent",
  prompt: "Sync docs for PR. Mode: apply, Scope: before-pr"
});
```