Unified cross-language consistency verification and documentation alignment. Phase A: verifies feature specs and protocol spec match all language implementations (classes, functions, parameters, return types) via itemized checklist comparison. Phase B: verifies all documentation (PRD, SRS, Tech Design, Test Plan, Feature Specs, PROTOCOL_SPEC, README, examples) is internally consistent and free of contradictions. Covers both apcore core SDKs and apcore-mcp bridges.
npx claudepluginhub tercel/tercel-claude-plugins --plugin apcore-skills

This skill uses the workspace's default tool permissions.
Unified consistency verification across all apcore ecosystem documentation and implementations.
DOCUMENTATION REPOS ARE THE SINGLE SOURCE OF TRUTH. Phase A verifies code matches specs. Phase B verifies docs are internally consistent. Never skip a phase. Never skip a checklist item.
| Thought | Reality |
|---|---|
| "Python is the reference, just copy it" | The documentation repo is the authority. Python may have diverged too. |
| "This naming difference is just language convention" | Convention differences (snake_case vs camelCase) are expected. Semantic differences (get_module vs findModule) are bugs. |
| "Extra methods in one SDK are fine" | Language-specific additions are OK only if documented. Undocumented extras indicate drift. |
| "I'll just compare export counts" | Count matching is necessary but not sufficient. Signature-level comparison is required. |
| "Docs look close enough" | If the code says get_module() but the README says find_module(), that is a bug. Every symbol must match exactly. |
| "I can check docs without verifying code first" | Phase B depends on Phase A. If Phase A has not established ground truth, docs verification is comparing against potentially wrong code. |
| "Checking a few symbols is representative" | Build the complete checklist. Compare every item. Partial checks create false confidence. |
| "CHANGELOG has the wrong API name" | CHANGELOG is a release artifact, not documentation. Leave it to the release skill. |
| "PRD is product-level, no need to check against code" | If the PRD says a feature exists but no implementation matches, that is a gap. Every layer must agree. |
/apcore-skills:sync [repo1,repo2,...] [--phase a|b|all] [--fix] [--scope core|mcp|all] [--lang python,typescript,...] [--save]
| Argument / Flag | Default | Description |
|---|---|---|
| positional repos | — | Comma-separated repo names to sync. See Positional Repo Arguments below. |
| --phase | all | Which phase to run: a (spec vs implementation), b (documentation internal consistency), all (A then B) |
| --fix | off | Auto-fix issues (naming, stubs, doc references) |
| --scope | cwd | Which group: core, mcp, or all. If omitted and no positional repos are given, defaults to the current working directory's repo only. Use --scope all to scan all repos. |
| --lang | all discovered | Comma-separated list of languages to compare |
| --save | off | Save the report to a file |
Positional repo names control exactly which repos are included. They take priority over --scope and CWD-based defaults.
Multiple repos (explicit set):
/apcore-skills:sync apcore,apcore-typescript,apcore-python
Syncs exactly these 3 repos — no expansion, no auto-discovery. The doc repo and impl repos are determined from the provided list:
- protocol repo (e.g., apcore) is in the list → it becomes doc_repo for the core scope
- docs-site repo (e.g., apcore-mcp) is in the list → it becomes doc_repo for the mcp scope
- if neither doc repo is listed, infer it from repo type (a core-sdk repo implies apcore/ as doc repo; an mcp-bridge repo implies apcore-mcp/ as doc repo)
- all remaining repos become impl_repos

Single repo name — smart expansion:
/apcore-skills:sync apcore
A single repo name triggers smart expansion based on the repo's type:
- protocol repo (apcore) → expand to apcore + all discovered core-sdk repos (apcore-{lang}). Does NOT include mcp-bridge repos.
- docs-site repo (apcore-mcp) → expand to apcore-mcp + all discovered mcp-bridge repos (apcore-mcp-{lang})
- core-sdk repo (apcore-python) → expand to this repo + its doc_repo (apcore). Only this single impl repo, not all core SDKs.
- mcp-bridge repo (apcore-mcp-python) → expand to this repo + its doc_repo (apcore-mcp). Only this single impl repo.
- integration repo → Phase A N/A; Phase B only on this repo
- shared-lib or tooling repo → Phase B only on this repo

Display: "Scope: {repo-names} (from positional args)"
No positional repos: Falls through to --scope or CWD-based default (see Step 1.1).
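The resolution rules above can be sketched in Python. This is an illustrative model only — repo-type names and helper functions are assumptions drawn from this document, not the skill's real implementation:

```python
# Sketch of positional-repo resolution: positional args > --scope flag > CWD default.
# Repo metadata shape ({"type": ..., "scope": ...}) is hypothetical.

def expand_single(name: str, all_repos: dict) -> list:
    """Smart expansion for a single positional repo name, keyed on repo type."""
    kind = all_repos[name]["type"]
    if kind == "protocol":    # apcore -> apcore + all core SDKs
        return [name] + sorted(r for r, m in all_repos.items() if m["type"] == "core-sdk")
    if kind == "docs-site":   # apcore-mcp -> apcore-mcp + all mcp bridges
        return [name] + sorted(r for r, m in all_repos.items() if m["type"] == "mcp-bridge")
    if kind == "core-sdk":    # single SDK + its doc repo only
        return [name, "apcore"]
    if kind == "mcp-bridge":
        return [name, "apcore-mcp"]
    return [name]             # integration / shared-lib / tooling: Phase B only

def resolve_repos(positional: list, scope_flag, cwd_repo: str, all_repos: dict) -> list:
    if positional:
        missing = [r for r in positional if r not in all_repos]
        if missing:
            raise ValueError(f"Repo(s) not found in ecosystem: {missing}")
        # one name expands; an explicit set is taken verbatim
        return expand_single(positional[0], all_repos) if len(positional) == 1 else positional
    if scope_flag:  # explicit --scope: every repo in that group ("all" = everything)
        return sorted(r for r, m in all_repos.items()
                      if scope_flag == "all" or m.get("scope") == scope_flag)
    return [cwd_repo]  # default: just the current working directory's repo
```

Note the asymmetry the spec calls out: an explicit multi-repo list is never expanded, while a single name is.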
Each --scope group has one documentation repo (single source of truth) and one or more implementation repos (language-specific code):
| Scope | Documentation Repo | Contents | Implementation Repos |
|---|---|---|---|
core | apcore/ | PROTOCOL_SPEC.md, PRD, SRS, Tech Design, Test Plan, Feature Specs | apcore-python/, apcore-typescript/, ... |
mcp | apcore-mcp/ | PRD, SRS, Tech Design, Test Plan, Feature Specs | apcore-mcp-python/, apcore-mcp-typescript/, ... |
Implementation repos contain only code and a README. They do NOT contain PRD/SRS/Tech Design/Test Plan/Feature Specs — those live exclusively in the documentation repo.
All per-repo operations use parallel sub-agents. The main context ONLY handles:
Phase A Step 2 spawns one sub-agent per implementation repo, all simultaneously for API extraction. Phase B Step 6 spawns one sub-agent per documentation repo + one per implementation repo, all simultaneously for documentation auditing. Fix steps spawn one sub-agent per repo for applying corrections.
Step 0 (ecosystem) → Step 1 (parse args) → PHASE A [Steps 2-5] → PHASE B [Steps 6-8] → Step 9 (combined report + review-compatible output) → [Step 10 (fix)]
@../shared/ecosystem.md
Filter repos based on --scope and --lang flags. Identify documentation repos and implementation repos per scope group.
Parse $ARGUMENTS for all flags and positional repo names. Determine:
- which repos are in scope (from positional args, --scope, or CWD)

Resolution priority: Positional repo args > --scope flag > CWD-based default.
If positional repo names are provided (comma-separated, non-flag tokens in $ARGUMENTS):
- Validate each name against repos[] from ecosystem discovery. If a name is not found, report error: "Repo '{name}' not found in ecosystem. Available: {repo names}"
- Skip the --scope and CWD-based default logic entirely
- Display: "Scope: {repo-names} (from positional args). {N} doc repos, {N} impl repos."
If --scope is NOT specified:
- Detect the CWD's repo from ecosystem discovery (e.g., apcore-python)
- core-sdk repo → set scope to core, filter impl_repos to only this repo
- mcp-bridge repo → set scope to mcp, filter impl_repos to only this repo
- protocol repo (apcore/) → set scope to core, include all core impl repos (user is editing the spec, so check all implementations against it)
- docs-site repo (apcore-mcp/) → set scope to mcp, include all mcp impl repos
- integration repo → Phase A is N/A (integrations don't have a protocol spec to compare against), run Phase B only on this repo
- shared-lib or tooling repo → run Phase B only on this repo (no spec to compare against)
- If CWD is not a known repo: use AskUserQuestion to ask: "CWD is not an apcore repo. Which repo do you want to sync?" with options from repos[] names + "All repos (full ecosystem scan)"

If --scope IS specified: use the explicit scope as before.
For each scope group, resolve:
- doc_repo — the documentation repo path (e.g., apcore/ for core, apcore-mcp/ for mcp)
- impl_repos[] — the implementation repo paths (may be a single repo if CWD-scoped)

Single-SDK handling: If a scope group contains fewer than 2 implementation repos:
Display:
Sync scope: {scope} {("(from CWD)" if defaulted)}
Languages: {lang1}, {lang2}, ...
Doc repos: {doc_repo1}, {doc_repo2}
Impl repos: {impl_repo1}, {impl_repo2}, ...
Phases: {A only | B only | A then B}
Mode: {report only | auto-fix}
Verify that the documentation repo's feature specs and protocol spec match what each language implementation actually exports. Build an explicit checklist, compare every item.
Spawn one Task(subagent_type="general-purpose") per implementation repo, all simultaneously in a single round of parallel Task calls. Each sub-agent extracts the public API from one repo independently. Do NOT process repos sequentially.
Sub-agent prompt:
Extract the complete public API surface from {repo_path}.
Follow the API Extraction Protocol:
1. Read the main export file:
- Python: src/{package}/__init__.py — extract all imports and __all__
- TypeScript: src/index.ts — extract all export statements
2. For each exported symbol, read its source file and extract:
- Kind: class | function | type | enum | constant | interface
- Name (in this language's convention)
- For classes: constructor params (name, type, required, default), all public methods with full signatures
- For functions: params (name, type, required, default), return type, async flag
- For enums: all member names and values
- For types/interfaces: all fields (name, type, required)
- For constants: name, type, value
3. Also extract:
- Error classes: name, error code, parent class
- Middleware interfaces: method signatures
- Extension points: discoverer, validator, exporter interfaces
Return a structured summary in this exact format:
REPO: {repo-name}
LANGUAGE: {language}
VERSION: {version}
EXPORT_COUNT: {N}
CLASSES:
- {ClassName}
constructor({param1}: {type1}, {param2}: {type2} = {default})
methods:
- {method_name}({params}) -> {return_type} [async]
- ...
FUNCTIONS:
- {function_name}({params}) -> {return_type} [async]
ENUMS:
- {EnumName}: {MEMBER1}={value1}, {MEMBER2}={value2}, ...
TYPES:
- {TypeName}: {field1}: {type1}, {field2}: {type2}, ...
ERRORS:
- {ErrorName}(code={CODE}, parent={ParentError})
CONSTANTS:
- {NAME}: {type} = {value}
Error handling:
- If the repo path does not exist, return: REPO: {repo-name}, STATUS: NOT_FOUND
- If the main export file is missing or empty, return: REPO: {repo-name}, STATUS: NO_EXPORTS, REASON: {description}
- If individual source files cannot be read, skip them and note in the summary
Keep the summary concise but complete. Target ~3-5KB.
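For the Python side, the extraction protocol above amounts to parsing the package's `__init__.py` and walking its top-level definitions. A minimal sketch using the stdlib `ast` module — illustrative only, not the sub-agent itself, and simplified to `__all__` plus top-level functions:

```python
# Parse Python source and collect __all__ entries and top-level function
# signatures (param names, required-param count, async flag).
import ast

def extract_public_api(source: str) -> dict:
    tree = ast.parse(source)
    api = {"all": [], "functions": {}}
    for node in tree.body:
        # __all__ = ["Registry", "get_module", ...]
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name) and target.id == "__all__"
                        and isinstance(node.value, (ast.List, ast.Tuple))):
                    api["all"] = [ast.literal_eval(elt) for elt in node.value.elts]
        # def foo(a, b=1): record params, how many lack defaults, and async-ness
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = node.args
            api["functions"][node.name] = {
                "params": [a.arg for a in args.args],
                "required": len(args.args) - len(args.defaults),
                "async": isinstance(node, ast.AsyncFunctionDef),
            }
    return api
```

A full extractor would also follow re-exported symbols to their defining modules and capture classes, enums, and constants, per the protocol above.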
Main context retains: Each repo's structured API summary. Store as api_summaries[repo_name].
For each documentation repo in scope, read the authoritative specs:
For apcore/ (core scope):
- {doc_repo_path}/PROTOCOL_SPEC.md — extract the API contract sections
- {doc_repo_path}/docs/features/*.md — extract per-feature API definitions (classes, functions, parameters, return types)
- If {doc_repo_path}/docs/spec/type-mapping.md exists — load cross-language type mappings

For apcore-mcp/ (mcp scope):
- {doc_repo_path}/docs/features/*.md — extract per-feature API definitions

Store as spec_api[scope] — the canonical API that all implementations in this scope must match.
@../shared/api-extraction.md
Build an explicit per-symbol checklist and evaluate every single item. No shortcuts.
From spec_api and all api_summaries, construct a union of all symbols. Apply canonical name normalization for matching, differentiated by symbol kind:
Classes / Types / Enums / Interfaces → normalize to PascalCase:
- PascalCase (pass through)
- snake_case → PascalCase

Functions / Methods / Variables / Constants → normalize to snake_case:
- snake_case (pass through)
- camelCase → snake_case
- PascalCase → snake_case (exported functions)

For each symbol, create a checklist row covering every checkable property.
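The normalization rules above can be sketched with two small regex-based converters. Function names and the kind labels are illustrative assumptions:

```python
# Canonical name normalization: symbols from different languages are mapped
# to one canonical form before matching (PascalCase for type-like symbols,
# snake_case for function-like symbols).
import re

def to_snake(name: str) -> str:
    """camelCase / PascalCase -> snake_case."""
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)   # getHTTPModule -> get_HTTPModule
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)     # remaining lower/upper boundaries
    return s.lower()

def to_pascal(name: str) -> str:
    """snake_case / camelCase -> PascalCase."""
    if "_" in name:
        return "".join(part.title() for part in name.split("_"))
    return name[:1].upper() + name[1:]

def canonical(name: str, kind: str) -> str:
    """Route by symbol kind, per the rules above."""
    if kind in {"class", "type", "enum", "interface"}:
        return to_pascal(name)
    return to_snake(name)
```

With this, Python's `get_module` and TypeScript's `getModule` normalize to the same key, so a convention difference matches while a semantic difference (`findModule`) does not.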
For each CLASS:
┌────────────────────────┬──────────┬──────────┬──────────┬──────────┐
│ Check Item │ Spec │ Python │ TypeScript│ Status │
├────────────────────────┼──────────┼──────────┼──────────┼──────────┤
│ Registry │ │ │ │ │
│ ├─ class exists │ ✓ │ ✓ │ ✓ │ PASS │
│ ├─ constructor params │ │ │ │ │
│ │ ├─ config: Config │ required │ required │ required │ PASS │
│ │ └─ discoverers │ optional │ optional │ MISSING │ FAIL │
│ ├─ method: register │ │ │ │ │
│ │ ├─ exists │ ✓ │ ✓ │ ✓ │ PASS │
│ │ ├─ name convention │ register │ register │ register │ PASS │
│ │ ├─ params │ (module) │ (module) │ (module) │ PASS │
│ │ ├─ return type │ None │ None │ void │ PASS │
│ │ └─ async │ no │ no │ no │ PASS │
│ ├─ method: get_module │ │ │ │ │
│ │ ├─ exists │ ✓ │ ✓ │ ✓ │ PASS │
│ │ ├─ name convention │ get_mod │ get_mod │ getMod │ PASS │
│ │ ├─ params │ (id) │ (id) │ (id) │ PASS │
│ │ └─ return type │ Module? │ Module? │ Module? │ PASS │
│ ├─ method: scan_dir │ │ │ │ │
│ │ ├─ exists │ ✓ │ ✓ │ ✗ │ FAIL │
│ │ ... │ │ │ │ │
└────────────────────────┴──────────┴──────────┴──────────┴──────────┘
Checklist items per CLASS:
For each FUNCTION:
For each ENUM:
For each TYPE/INTERFACE:
For each ERROR CLASS:
For each CONSTANT:
For each implementation repo, compare against the spec API:
Store the full evaluated checklist as phase_a_results:
- verified_api — the spec-defined API surface with per-symbol verification status from implementations. Definition: the union of all symbols from the spec, annotated with which implementations have them and whether they match. For PASS symbols, the spec definition is confirmed correct. For FAIL symbols, the spec definition is still authoritative (implementations are wrong).
- checklist — every item with its PASS/FAIL/WARN status
- findings — structured list of all failures with severity

This verified_api becomes the input truth for Phase B. When injecting into Phase B sub-agent prompts, format as:
VERIFIED API ({repo-name}):
CLASSES:
- {ClassName}: constructor({params}), methods: {method1}({params}) -> {ret}, ...
FUNCTIONS:
- {func_name}({params}) -> {ret}
TYPES:
- {TypeName}: {field1}: {type1}, {field2}: {type2}, ...
ENUMS:
- {EnumName}: {MEMBERS}
CONSTANTS:
- {NAME}: {type} = {value}
ERRORS:
- {ErrorName}(code={CODE})
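The per-symbol evaluation behind the checklist can be sketched as a union walk over the spec and the per-repo summaries. The flat dict shapes and status labels here are simplifications for illustration:

```python
# Evaluate every spec symbol against every implementation repo, then flag
# implementation-only symbols as undocumented extras (possible drift).

def evaluate(spec_api: dict, api_summaries: dict) -> list:
    """spec_api: {canonical_name: signature}
    api_summaries: {repo_name: {canonical_name: signature}}
    Returns (symbol, repo, status) tuples — one row per checklist cell."""
    findings = []
    for symbol, spec_sig in spec_api.items():
        for repo, impl_api in api_summaries.items():
            if symbol not in impl_api:
                findings.append((symbol, repo, "MISSING"))
            elif impl_api[symbol] != spec_sig:
                findings.append((symbol, repo, "SIGNATURE_MISMATCH"))
            else:
                findings.append((symbol, repo, "PASS"))
    # Symbols an implementation exports that the spec never defines.
    for repo, impl_api in api_summaries.items():
        for symbol in impl_api.keys() - spec_api.keys():
            findings.append((symbol, repo, "UNDOCUMENTED_EXTRA"))
    return findings
```

Because the walk starts from the spec's symbol set, count matching alone can never hide a gap: a missing symbol and a same-named extra produce two distinct findings.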
═══ PHASE A: Spec ↔ Implementation Consistency ═══
Scope: {scope}
Doc repo: {doc_repo} → Impl repos: {impl1}, {impl2}, ...
Checklist: {total_items} items checked
PASS: {n}
FAIL: {n}
WARN: {n}
Spec compliance:
{impl-repo-1}: {N}/{total} symbols ({pct}%) ✓
{impl-repo-2}: {N}/{total} symbols ({pct}%) ⚠ {missing} missing
Cross-implementation:
Total symbols: {N}
Matching: {N}
Missing: {N}
Signature mismatch: {N}
Naming inconsistency: {N}
Type mismatch: {N}
FAIL items (expanded):
❌ Registry.scan_directory()
Present in: spec, apcore-python
Missing in: apcore-typescript
Spec: defined in docs/features/registry.md
❌ Executor.execute() — param mismatch
Spec: (module_id: str, input: dict, context: Context | None = None) -> ExecutionResult
Python: (module_id: str, input: dict, context: Context | None = None) -> ExecutionResult ✓
TypeScript: (moduleId: string, input: Record<string, unknown>) -> ExecutionResult ✗ missing context param
If --save flag: write report to {ecosystem_root}/sync-report-phase-a-{date}.md.
If --phase a only: display this report and stop. Otherwise continue to Phase B.
Phase B runs ONLY after Phase A completes. It verifies two things:
1. Documentation repos are internally consistent across the spec chain
2. Implementation repo docs match the verified_api from Phase A

Spawn sub-agents in parallel: one per documentation repo + one per implementation repo, all simultaneously.
Sub-agent prompt:
Audit internal documentation consistency in {doc_repo_path}.
This is a DOCUMENTATION REPO containing specs and feature definitions. Check that
all documents are internally consistent — no contradictions between layers.
=== SCOPE 1: Spec Chain Consistency ===
Read all available documents from the spec chain:
- PRD (if exists): docs/prd.md or similar
- SRS (if exists): docs/srs.md or similar
- Tech Design (if exists): docs/tech-design.md or similar
- Test Plan (if exists): docs/test-plan.md or similar
- Feature Specs: docs/features/*.md
- Protocol Spec (if exists): PROTOCOL_SPEC.md
For each API symbol (class, function, parameter, return type) mentioned across multiple documents:
1. Collect ALL references: which document, what section, what it says
2. Compare: do all documents agree on the symbol's name, parameters, behavior, and types?
3. Flag contradictions:
- PRD says feature X has capability A, but feature spec says no such capability
- SRS requirement REQ-001 references function foo(), but tech design calls it bar()
- Test plan tests for param "timeout" but feature spec defines it as "max_wait"
- Feature spec A says Registry has method scan(), feature spec B says it's discover()
=== SCOPE 2: Feature Spec Completeness ===
For each feature spec in docs/features/:
1. Does it define clear API symbols (classes, functions, params, return types)?
2. Are there features mentioned in PRD/SRS that have NO corresponding feature spec?
3. Are there feature specs that are NOT referenced by any higher-level document?
=== SCOPE 3: Cross-Document Reference Integrity ===
Check all internal cross-references:
1. Do documents reference sections/features that actually exist?
2. Are version numbers consistent across documents?
3. Are terminology and naming consistent (same concept uses same name everywhere)?
Return findings in this exact format:
DOC_REPO: {repo-name}
DOCUMENTS_FOUND: {list of documents checked with paths}
SPEC_CHAIN:
LAYERS_CHECKED: {list: prd, srs, tech-design, test-plan, feature-specs, protocol-spec}
CONTRADICTIONS: {N}
GAPS: {N}
FINDING_COUNT: {N}
FINDINGS:
- severity: {critical|warning|info}
scope: {spec-chain|completeness|cross-ref}
detail: {description}
locations:
- {file1:section} says: "{quote1}"
- {file2:section} says: "{quote2}"
contradiction: {what disagrees}
fix: {which document should be authoritative and what to change}
Error handling:
- If the documentation repo path does not exist, return: DOC_REPO: {repo-name}, STATUS: NOT_FOUND
- If no spec chain documents are found (no PRD, SRS, tech design, feature specs), return: DOC_REPO: {repo-name}, STATUS: NO_DOCS, DOCUMENTS_FOUND: []
- If individual files cannot be read, skip them and list in DOCUMENTS_FOUND as "{path} (unreadable)"
Sub-agent prompt:
Audit documentation in {impl_repo_path} for consistency with the verified API surface.
This is an IMPLEMENTATION REPO. It should contain only code and a README (plus optional examples).
It does NOT contain PRD/SRS/Tech Design/Test Plan/Feature Specs.
VERIFIED API (ground truth from Phase A):
{verified_api for this repo — the confirmed-correct API symbols, signatures, types}
=== SCOPE 1: README ===
1. Read README.md
2. Check required sections: Title/badges, Description, Installation, Quick Start, Features, API Overview, Docs link, License
3. For Installation: verify package name matches build config (pyproject.toml/package.json)
4. For Quick Start code examples: extract all API references, verify they match verified API
- Import names correct?
- Class names correct?
- Method names correct?
- Parameter names and order correct?
5. For API Overview: verify listed classes/functions exist in verified API with correct descriptions
6. For version references: verify they match current version in build config
=== SCOPE 2: API References in Markdown ===
Search ALL markdown files in the repo (README.md, docs/**/*.md) for API symbol references:
1. For EACH symbol reference found:
a. Does the symbol exist in the verified API?
b. Do parameter names/order match the verified signature?
c. Are import paths correct?
d. Are return types correctly described?
2. Cross-check: do different markdown files contradict each other?
- If README says `get_module(id)` but docs/usage.md says `get_module(module_id)` → CONTRADICTION
=== SCOPE 3: Example Code ===
1. Scan examples/, demo/, example/ directories
2. For each example source file (*.py, *.ts, *.js):
a. Extract import statements and API usage
b. Cross-reference against verified API — correct class names, method names, params?
c. Check dependency versions reference correct SDK version
3. Check example README exists with setup instructions
=== SCOPE 4: Cross-Document Contradiction Detection ===
For every API symbol mentioned in more than one place within this repo:
1. Collect all references (file, line, what it says)
2. Compare: do all references agree on name, params, behavior?
3. Flag any contradictions between documents
Return findings in this exact format:
REPO: {repo-name}
README:
SECTIONS_PRESENT: {list}
SECTIONS_MISSING: {list}
API_MISMATCHES: {list of references that don't match verified API}
VERSION_MISMATCHES: {list}
INSTALL_CORRECT: true|false
API_REFS:
REFERENCES_CHECKED: {N}
MISMATCHES: {N}
EXAMPLES:
EXAMPLE_DIRS: {list or "none"}
MISMATCHES: {N}
CONTRADICTIONS: {N}
{list of cases where different docs within this repo say different things}
FINDING_COUNT: {N}
FINDINGS:
- severity: {critical|warning|info}
scope: {readme|api-refs|examples|contradiction}
detail: {description}
location: {file:section or file:line}
verified_api_says: {correct value from Phase A}
doc_says: {what the doc currently says}
fix: {suggested fix}
Error handling:
- If the repo path does not exist, return: REPO: {repo-name}, STATUS: NOT_FOUND
- If README.md is missing, report SECTIONS_PRESENT: [], SECTIONS_MISSING: ["all"], and continue checking other scopes
- If no markdown files are found, skip API_REFS scope and report REFERENCES_CHECKED: 0
- If no example directories exist, report EXAMPLE_DIRS: "none", MISMATCHES: 0
Main context retains: Structured findings per repo.
After collecting all per-repo findings, the main context performs cross-repo checks:
- Cross-repo contradiction detection: if one repo's docs describe Registry.get_module() with certain behavior, and apcore-typescript's README describes Registry.getModule() differently, flag it (semantic description should match even if name differs by convention)

═══ PHASE B: Documentation Internal Consistency ═══
--- Documentation Repos ---
{doc_repo_1} ({scope}):
Spec chain layers: {list}
Contradictions: {N}
Completeness gaps: {N}
Cross-ref issues: {N}
CONTRADICTIONS:
⚠ PRD §3.2 says "Registry supports glob patterns"
but feature spec registry.md defines no glob parameter
⚠ SRS REQ-012 references "Executor.run()"
but tech design §4.1 calls it "Executor.execute()"
--- Implementation Repos ---
Repo | README | API Refs | Examples | Cross-Doc
apcore-python | PASS | PASS | — | PASS
apcore-typescript | WARN | FAIL | — | FAIL
apcore-rust | PASS | PASS | — | PASS
apcore-mcp-python | PASS | PASS | — | PASS
apcore-mcp-typescript | WARN | PASS | — | PASS
MISMATCHES:
❌ apcore-typescript README Quick Start uses `findModule()`
but verified API says `getModule()`
❌ apcore-typescript docs/usage.md says `execute(moduleId, input)`
but verified API says `execute(moduleId, input, context?)`
--- Cross-Repo ---
Cross-repo contradictions: {N}
Link consistency: {PASS|FAIL}
If --save flag: write report to {ecosystem_root}/sync-report-phase-b-{date}.md.
apcore-skills sync — Unified Consistency Report
Scope: {scope} | Languages: {langs} | Date: {date}
Phases: A (spec ↔ code) + B (documentation)
═══ PHASE A: Spec ↔ Implementation ═══
Checklist: {N} items | PASS: {n} | FAIL: {n} | WARN: {n}
{checklist table — only FAIL/WARN items expanded}
Spec compliance:
{impl-repo-1}: {N}/{total} ({pct}%)
{impl-repo-2}: {N}/{total} ({pct}%)
Cross-implementation:
Total: {N} | Match: {N} | Missing: {N} | Mismatch: {N} | Naming: {N} | Type: {N}
═══ PHASE B: Documentation Consistency ═══
Doc repo internal:
{doc-repo}: {N} contradictions, {N} gaps
Implementation repo docs:
Repo | README | API Refs | Examples | Cross-Doc
(matrix)
Cross-repo: {N} contradictions
═══ COMBINED FINDINGS (sorted by severity) ═══
CRITICAL:
[A-001] Missing API: Registry.scan_directory()
Repo: apcore-typescript
Spec: defined in apcore/docs/features/registry.md
Phase A — present in spec + Python, missing in TypeScript
[B-001] Spec chain contradiction
Doc repo: apcore
PRD says "glob patterns" but feature spec has no glob param
Phase B — internal documentation inconsistency
[B-002] API reference mismatch
Repo: apcore-typescript
README uses findModule(), verified API says getModule()
Phase B — implementation doc does not match verified code
WARNING:
[A-002] ...
[B-003] ...
INFO:
...
═══ SUMMARY ═══
Phase A: {N} findings (critical: {n}, warning: {n}, info: {n})
Phase B: {N} findings (critical: {n}, warning: {n}, info: {n})
Total: {N} findings
Contradictions (doc internal): {N}
Contradictions (cross-repo): {N}
If --save flag: write report to {ecosystem_root}/sync-report-{date}.md.
After the Combined Report, ALWAYS append a review-compatible report so that /code-forge:fixbug --review can directly consume it.
Convert all CRITICAL and WARNING findings from both phases into code-forge:review format. Format follows code-forge:review output schema (see code-forge/skills/review/SKILL.md). If the review format changes, update this mapping accordingly.
Use the # Project Review: header with a dynamic scope description (derived from Step 1 — e.g., repo name, scope group, or "all") and structured issue entries. Output the review-compatible report as raw markdown (not inside a fenced code block) so that fixbug can parse it from the conversation context.
# Project Review: {scope_description}
## Consistency
{For each finding from Phase A and Phase B with severity critical or warning, emit one issue entry:}
- severity: <blocker | critical | warning>
file: {target file path — the file that needs to be fixed}
line: {line number or range, use 1 if unknown}
title: [{finding_id}] {short title}
description: {what is inconsistent and why it matters — include cross-reference to spec or other repo}
suggestion: {concrete fix instruction — what to change, what to match against}
Severity mapping from sync findings to review format:
| Sync Severity | Review Severity | Condition |
|---|---|---|
| critical | blocker | Missing API (symbol defined in spec but absent from implementation) |
| critical | critical | Signature mismatch, type mismatch, spec chain contradiction |
| warning | warning | Naming inconsistency, doc mismatch, missing README section |
| info | (skip) | Not included — info-level findings are not actionable bugs |
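The mapping table reduces to a small pure function. The finding field names (`severity`, `kind`) and kind labels are hypothetical here:

```python
# Map sync-finding severities to code-forge:review severities.
# Missing APIs escalate to blocker; info-level findings are dropped.

def review_severity(finding: dict):
    if finding["severity"] == "critical":
        return "blocker" if finding["kind"] == "missing-api" else "critical"
    if finding["severity"] == "warning":
        return "warning"
    return None  # info: not included in the review report
```

Callers would skip any finding for which this returns None, so only actionable issues reach fixbug.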
Rules:
- file field MUST point to the implementation or doc file that needs changing (not the spec file)
- suggestion field MUST be concrete enough for fixbug to act on directly (e.g., "Rename findModule to getModule to match spec" rather than "fix naming")
- Where possible, include the exact expected value from verified_api in the suggestion

Example output:
# Project Review: {scope_description}
## Consistency
- severity: blocker
file: apcore-typescript/src/registry.ts
line: 1
title: [A-001] Missing API — Registry.scanDirectory()
description: Registry.scan_directory() is defined in apcore/docs/features/registry.md and implemented in apcore-python, but missing from apcore-typescript.
suggestion: Add `scanDirectory(path: string, options?: ScanOptions): Promise<Module[]>` method to Registry class, matching the spec signature.
- severity: critical
file: apcore-typescript/src/executor.ts
line: 42
title: [A-003] Param mismatch — Executor.execute() missing context param
description: Spec defines execute(moduleId, input, context?) but TypeScript implementation only has execute(moduleId, input). Missing optional context parameter.
suggestion: Add optional `context?: Context` as third parameter to `execute()` method.
- severity: critical
file: apcore/docs/prd.md
line: 87
title: [B-001] Spec chain contradiction — glob patterns
description: PRD §3.2 says "Registry supports glob patterns" but feature spec registry.md defines no glob parameter. Documents disagree.
suggestion: Remove glob pattern reference from PRD §3.2 to match feature spec, or add glob parameter to feature spec if the capability is intended.
- severity: warning
file: apcore-typescript/README.md
line: 35
title: [B-002] API reference mismatch — findModule vs getModule
description: README Quick Start uses `findModule()` but verified API says `getModule()`.
suggestion: Replace `findModule(` with `getModule(` in README Quick Start code example.
If no CRITICAL or WARNING findings exist, still output the header with a note:
# Project Review: {scope_description}
## Consistency
_(No actionable issues found — all checks passed.)_
Group all findings from both phases by repo. Spawn one Task(subagent_type="general-purpose") per repo that has fixable findings, all in parallel.
Sub-agent prompt for implementation repos:
Apply sync fixes for {repo_path} ({language}).
Phase A findings to fix:
{naming, missing, type issues from Phase A}
Phase B findings to fix:
{readme, api-ref, examples issues from Phase B}
Fix rules (apply in order):
PHASE A FIXES (code):
1. NAMING FIXES — For each naming inconsistency:
- Canonical name: {canonical} → language convention: {expected_name}
- Rename the function/method/class in its source file using Edit
- Update the export in __init__.py / index.ts
- Update any internal references within the same repo
2. MISSING API STUBS — For each missing symbol:
- Generate a stub implementation in {language} with TODO markers
- Match the signature from the spec (canonical form)
- Add the export to the main module file
- Create a corresponding test stub in tests/
3. VERIFY — After all Phase A fixes:
- Run the full test suite: {pytest --tb=short -q | npx vitest run}
- If any test fails due to a fix: revert ONLY that specific fix and note it
PHASE B FIXES (docs):
4. README FIXES — Add missing sections, update API names to match verified API, update version references
5. API REFERENCE FIXES — Update symbol names, param names, import paths in all markdown files
6. EXAMPLE FIXES — Update API usage and dependency versions in example code
7. CONTRADICTION FIXES — Resolve contradictions by aligning all docs to verified API
After all fixes:
1. List all files modified with a summary of changes
2. Do NOT commit — leave changes for user review
Error handling: If test runner is not available, skip verification and note it.
Return:
REPO: {repo-name}
PHASE_A_FIXES: {count} (naming: {n}, stubs: {n})
PHASE_B_FIXES: {count} (readme: {n}, api-refs: {n}, examples: {n}, contradictions: {n})
TEST_RESULT: {pass|fail|skipped}
TEST_COUNTS: {passed}/{total}
REVERTED_FIXES: {list or "none"}
FILES_MODIFIED: {list}
Sub-agent prompt for documentation repos:
Apply documentation consistency fixes for {doc_repo_path}.
Phase B findings to fix:
{spec chain contradictions, completeness gaps, cross-ref issues from Phase B}
Fix rules:
1. SPEC CHAIN CONTRADICTIONS — For each contradiction between documents:
- Identify which document is the higher-authority source:
Authority order: Feature Specs > Tech Design > SRS > PRD
(Feature specs are closest to implementation, PRD is most abstract)
- Update the lower-authority document to match the higher-authority one
- If the contradiction is between documents at the same level: flag for manual review, do NOT auto-fix
2. COMPLETENESS GAPS — For features mentioned in PRD/SRS but missing feature specs:
- Do NOT generate feature specs automatically (too complex for auto-fix)
- Add a TODO note in the appropriate location
3. CROSS-REFERENCE FIXES — Fix broken internal references:
- Update section references to point to correct locations
- Fix terminology inconsistencies to use the canonical name
After all fixes:
1. List all files modified
2. Do NOT commit
Error handling:
- If the doc repo path does not exist, return: DOC_REPO: {repo-name}, STATUS: NOT_FOUND
- If a file is unwritable, skip it and list in MANUAL_REVIEW_ITEMS
- If same-level contradiction (ambiguous authority): flag for manual review, do NOT auto-fix
Return:
DOC_REPO: {repo-name}
FIXES_APPLIED: {count}
FLAGGED_FOR_MANUAL_REVIEW: {count}
FILES_MODIFIED: {list}
MANUAL_REVIEW_ITEMS:
- {description of what needs human decision}
Wait for all sub-agents to complete. Display consolidated results:
Auto-fix results:
Implementation repos:
apcore-typescript: Phase A: 3 naming fixes, 1 stub | Phase B: 2 readme fixes
Tests: 287/287 ✓
Files: 5 changed
apcore-mcp-typescript: Phase A: 1 naming fix | Phase B: 0
Tests: 112/112 ✓
Files: 1 changed
Documentation repos:
apcore: 2 contradiction fixes, 1 flagged for manual review
Files: 2 changed
Manual review: PRD §3.2 vs feature spec — need human decision on glob pattern scope
Uncommitted changes — review with:
cd {repo} && git diff