Validates adapter files against live platform documentation to detect stale mappings, missing features, and outdated version information.
/plugin marketplace add sequenzia/agent-alchemy
/plugin install agent-alchemy-plugin-tools@agent-alchemy

This skill is limited to using the following tools:
Validate an adapter file against live platform documentation to detect stale mappings, missing features, and outdated version information. Optionally apply updates in-place when the --update flag is provided.
CRITICAL: Complete ALL 4 phases. The workflow is not complete until Phase 4: Report & Apply is finished. After completing each phase, immediately proceed to the next phase without waiting for user prompts.
IMPORTANT: You MUST use the AskUserQuestion tool for ALL questions to the user. Never ask questions through regular text output.
Text output should only be used for:
If you need the user to make a choice or provide input, use AskUserQuestion.
NEVER do this (asking via text output):
Would you like to apply updates or export the report?
1. Apply updates
2. Export report
3. View details
ALWAYS do this (using AskUserQuestion tool):
AskUserQuestion:
questions:
- header: "Validation Complete"
question: "How would you like to proceed?"
options:
- label: "View detailed findings"
description: "Show full per-entry breakdown grouped by classification"
- label: "Apply updates"
description: "Update stale/missing entries in the adapter file"
- label: "Export report"
description: "Write validation report as a markdown file"
multiSelect: false
This skill can be invoked standalone via /validate-adapter or loaded as a sub-workflow by other skills (e.g., update-ported-plugin). When loaded as a sub-workflow:
- If SKIP_REPORT = true in context, stop after Phase 3 and return the validation report data structure to the caller.
- If no SKIP_REPORT flag is present, execute all 4 phases (standalone behavior).

CRITICAL: This skill performs an interactive validation workflow, NOT an implementation plan. When invoked during Claude Code's plan mode:
Execute these phases in order, completing ALL of them:
Goal: Parse arguments, load the adapter format specification, read the target adapter file, and extract its current state.
Parse $ARGUMENTS for:
- --target <platform> -- Target platform slug (default: opencode)
- --update -- Enable in-place update mode (default: false)

Set TARGET_PLATFORM from the --target value (default: opencode).
Set UPDATE_MODE from --update flag (default: false).
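The flag handling above can be sketched as a small parser. This is illustrative only -- the skill itself interprets $ARGUMENTS as prose instructions, and the function name and return shape are assumptions:

```python
import shlex

def parse_arguments(arguments: str) -> dict:
    """Parse a /validate-adapter argument string into settings.

    Defaults mirror the spec: target platform "opencode", update mode off.
    """
    settings = {"target_platform": "opencode", "update_mode": False}
    tokens = shlex.split(arguments)
    i = 0
    while i < len(tokens):
        if tokens[i] == "--target" and i + 1 < len(tokens):
            settings["target_platform"] = tokens[i + 1]
            i += 2
        elif tokens[i] == "--update":
            settings["update_mode"] = True
            i += 1
        else:
            i += 1  # ignore unrecognized tokens
    return settings
```

Unrecognized tokens are skipped rather than rejected, matching the lenient way slash-command arguments are typically handled.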
Read the adapter format reference to understand the expected 9-section structure:
Read: ${CLAUDE_PLUGIN_ROOT}/references/adapter-format.md
Record the 9 required/optional sections:
Read the adapter file for the target platform:
Read: ${CLAUDE_PLUGIN_ROOT}/references/adapters/${TARGET_PLATFORM}.md
Error handling -- adapter not found:
If the adapter file does not exist:
Glob: ${CLAUDE_PLUGIN_ROOT}/references/adapters/*.md
AskUserQuestion:
questions:
- header: "Adapter Not Found"
question: "No adapter found for '${TARGET_PLATFORM}'. Which adapter would you like to validate?"
options:
- label: "{adapter-1}"
description: "Adapter file: {adapter-1}.md"
- label: "{adapter-2}"
description: "Adapter file: {adapter-2}.md"
multiSelect: false
ERROR: No adapter files found in ${CLAUDE_PLUGIN_ROOT}/references/adapters/.
Create an adapter first using /port-plugin or manually following the adapter format spec.
Parse all 9 sections from the adapter file by matching H2 (##) headers against the section names defined in the adapter format specification.
For each section found, record its content.
For each expected section NOT found, record it as missing.
From the Adapter Version section (Section 9), extract:
- adapter_version -- semver version of this adapter file
- target_platform_version -- version of the platform the adapter was written for
- last_updated -- ISO 8601 date of last review/update

If any of these fields are missing, record them as unknown.
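The section split and version extraction described above can be sketched as follows. Assumptions: sections begin at H2 (`##`) headers, and the version fields appear either as `key: value` pairs or as table rows in Section 9:

```python
import re

def parse_adapter(markdown: str) -> dict:
    """Split an adapter file into its H2 sections and extract version fields."""
    sections = {}
    # Capture each "## Heading" and the text up to the next H2 (or end of file).
    for match in re.finditer(r"^## (.+?)\n(.*?)(?=^## |\Z)", markdown, re.M | re.S):
        sections[match.group(1).strip()] = match.group(2)

    version_info = {}
    body = sections.get("Adapter Version", "")
    for field in ("adapter_version", "target_platform_version", "last_updated"):
        # Accept "field: value" or "| field | value |" table rows.
        m = re.search(rf"{field}\s*[:|]\s*([^\s|]+)", body)
        version_info[field] = m.group(1) if m else "unknown"
    return {"sections": sections, "version_info": version_info}
```

Missing fields fall back to "unknown", matching the rule above.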
AskUserQuestion:
questions:
- header: "Adapter Validation Summary"
question: |
Adapter loaded for validation:
| Field | Value |
|-------|-------|
| Platform | ${TARGET_PLATFORM} |
| Adapter Version | ${adapter_version} |
| Target Platform Version | ${target_platform_version} |
| Last Updated | ${last_updated} |
| Sections Found | ${found_count} / 9 |
| Update Mode | ${UPDATE_MODE ? 'Enabled' : 'Disabled'} |
Proceed with platform research?
options:
- label: "Proceed"
description: "Spawn researcher agent to investigate current platform state"
- label: "Cancel"
description: "Cancel validation"
multiSelect: false
If the user selects "Cancel", exit the workflow.
Goal: Spawn the researcher agent to gather current platform documentation and compare against the adapter.
Spawn the researcher agent using the Task tool with subagent_type: "agent-alchemy-plugin-tools:researcher":
Task tool prompt:
"Research the current state of the plugin/extension system for: ${TARGET_PLATFORM}
You are validating an existing adapter file for this platform. The adapter was last updated on ${last_updated} and targets platform version ${target_platform_version}.
Focus your research on:
1. The current version of ${TARGET_PLATFORM} and any version changes since ${target_platform_version}
2. Plugin system changes, new features, or API modifications since the adapter was written
3. Deprecated or removed features that the adapter may still reference
4. New capabilities or configuration options not present in the adapter
The adapter file is located at: ${CLAUDE_PLUGIN_ROOT}/references/adapters/${TARGET_PLATFORM}.md
Read the adapter file and perform your Phase 4: Adapter Comparison against your research findings. Pay special attention to:
- Tool name changes or new tools added to the platform
- Directory structure changes
- Configuration format changes
- Model/provider changes
- Hook/lifecycle system changes
- Composition mechanism changes
Return a structured platform profile with your Adapter Comparison section fully populated."
Retry logic: If the Task tool call fails (agent error, timeout, or empty response), retry once automatically. If the retry also fails, proceed to research failure handling in Step 3.
When the researcher agent returns its platform profile:
Extract the following into RESEARCH_FINDINGS:

- research_platform_version -- current platform version found by research
- documentation_quality -- excellent/good/fair/poor/none
- overall_confidence -- high/medium/low
- sources_consulted -- count and list of sources
- stale_mappings -- list from the Adapter Comparison section
- missing_mappings -- list from the Adapter Comparison section

Then display a summary:

Research complete:
- Documentation quality: ${documentation_quality}
- Overall confidence: ${overall_confidence}
- Sources consulted: ${sources_count} (${official_count} official, ${community_count} community)
- Platform version found: ${research_platform_version} (adapter targets: ${target_platform_version})
If the researcher agent fails or returns empty results:
AskUserQuestion:
questions:
- header: "Research Failed"
question: "The researcher agent could not gather platform documentation. How would you like to proceed?"
options:
- label: "Retry research"
description: "Spawn the researcher agent again"
- label: "Manual validation"
description: "Skip research; validate adapter structure only (no staleness detection)"
- label: "Cancel"
description: "Cancel the validation workflow"
multiSelect: false
Goal: Compare each adapter section against research findings and classify every entry.
For each of the 9 adapter sections, compare the adapter content against the research findings.
Special handling for low-confidence research: If the researcher reported overall_confidence: low for an entire topic area, or if the Adapter Comparison section has no data for a given adapter section, mark the entire section as Uncertain rather than making unreliable per-entry classifications.
For each mapping/entry in each adapter section, classify it as one of:
| Classification | Meaning | Criteria |
|---|---|---|
| Current | Matches research findings | Adapter value aligns with what research found; no changes needed |
| Stale | Outdated information | Research has newer/different data for this entry; adapter needs updating |
| Missing | Not in adapter | Platform has features/capabilities that the adapter does not cover |
| Removed | No longer supported | Adapter references features the platform has deprecated or removed |
| Uncertain | Cannot determine | Research confidence is low; unable to reliably classify this entry |
Classification rules by section:
Platform Metadata:
- documentation_url, repository_url, plugin_docs_url against research-discovered URLs
- notes still accurately describe the platform status

Directory Structure:

- plugin_root, skill_dir, agent_dir, etc. against research-discovered paths

Tool Name Mappings:
Model Tier Mappings:
Frontmatter Translations:
Hook/Lifecycle Event Mappings:
Composition Mechanism:
Path Resolution:
Adapter Version:
- target_platform_version against research_platform_version
- last_updated age -- if older than 6 months, add a note

Construct the validation report with the following structure:
Per-section breakdown:
| Section | Total Entries | Current | Stale | Missing | Removed | Uncertain |
|---------|--------------|---------|-------|---------|---------|-----------|
| Platform Metadata | N | N | N | N | N | N |
| Directory Structure | N | N | N | N | N | N |
| Tool Name Mappings | N | N | N | N | N | N |
| ... | ... | ... | ... | ... | ... | ... |
| **Totals** | **N** | **N** | **N** | **N** | **N** | **N** |
Overall health score:
Health Score = (Current entries / Total classified entries) * 100
Exclude Uncertain entries from the denominator (they neither help nor hurt the score). If all entries are Uncertain, report the health score as "N/A -- insufficient research data".
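The scoring rule above can be expressed as a small helper (a sketch; the category key names are assumptions):

```python
def health_score(counts: dict) -> str:
    """Compute the adapter health score from classification counts.

    Uncertain entries are excluded from the denominator, per the spec.
    """
    classified = sum(v for k, v in counts.items() if k != "uncertain")
    if classified == 0:
        return "N/A -- insufficient research data"
    return f"{counts.get('current', 0) / classified * 100:.0f}%"
```

For example, 8 current entries out of 10 classified (with 4 uncertain entries ignored) yields a score of 80%.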
Stale entries detail list:
| Section | Entry | Current Value | Suggested Value | Source |
|---------|-------|--------------|-----------------|--------|
| {section} | {entry name} | {adapter value} | {research value} | {source URL or description} |
Missing entries detail list:
| Section | Feature | Description | Suggested Mapping |
|---------|---------|-------------|-------------------|
| {section} | {feature name} | {research description} | {suggested adapter value} |
Removed entries detail list:
| Section | Entry | Adapter Value | Evidence |
|---------|-------|--------------|----------|
| {section} | {entry name} | {current adapter value} | {evidence of removal from research} |
Store the complete report as VALIDATION_REPORT.
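One possible shape for VALIDATION_REPORT, mirroring the tables above, is sketched below. The field names are hypothetical -- the spec only fixes the table columns, not an in-memory layout:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StaleEntry:
    section: str
    entry: str
    current_value: str
    suggested_value: str
    source: str  # source URL or description

@dataclass
class ValidationReport:
    platform: str
    health_score: str         # e.g. "80%" or "N/A -- insufficient research data"
    per_section_counts: dict  # section name -> {"current": N, "stale": N, ...}
    stale_entries: List[StaleEntry] = field(default_factory=list)
    missing_entries: List[dict] = field(default_factory=list)  # feature, description, suggested mapping
    removed_entries: List[dict] = field(default_factory=list)  # entry, adapter value, evidence
```

In sub-workflow mode this structure is what would be handed back to the caller.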
If SKIP_REPORT = true (sub-workflow mode), return VALIDATION_REPORT to the caller and stop here. Do not proceed to Phase 4.
Goal: Present the validation report to the user and optionally apply updates.
Determine the health indicator based on the health score:
Build the action options based on context:
- Offer "Apply updates" only when UPDATE_MODE = true AND there are stale/missing/removed entries.
- When UPDATE_MODE = false and there are findings, include a note that the --update flag enables in-place updates.

AskUserQuestion:
questions:
- header: "Adapter Validation Report"
question: |
## ${TARGET_PLATFORM} Adapter -- ${health_indicator}
**Health Score: ${health_score}%**
| Metric | Count |
|--------|-------|
| Current | ${current_count} |
| Stale | ${stale_count} |
| Missing | ${missing_count} |
| Removed | ${removed_count} |
| Uncertain | ${uncertain_count} |
${stale_count + missing_count + removed_count > 0 ?
stale_count + " stale, " + missing_count + " missing, and " + removed_count + " removed entries found." :
"All entries are current. No updates needed."}
How would you like to proceed?
options:
- label: "View detailed findings"
description: "Show full per-entry breakdown grouped by classification"
- label: "Apply updates"
description: "Update the adapter file with research-backed corrections (requires --update flag)"
- label: "Export report"
description: "Write validation report to a markdown file"
multiSelect: false
If the user selects "View detailed findings":
Present the full per-entry breakdown grouped by classification:
Stale Entries (if any): Show the stale entries detail list with current value, suggested value, and source.
Missing Entries (if any): Show the missing entries detail list with feature name, description, and suggested mapping.
Removed Entries (if any): Show the removed entries detail list with entry name, current value, and evidence.
Uncertain Entries (if any): Show uncertain entries with a note explaining the low research confidence.
Current Entries (if any): Show a summary count per section (do not list every current entry individually).
After presenting details, re-present the action choice:
AskUserQuestion:
questions:
- header: "Next Steps"
question: "What would you like to do with these findings?"
options:
- label: "Apply updates"
description: "Update the adapter file with research-backed corrections"
- label: "Export report"
description: "Write validation report to a markdown file"
- label: "Done"
description: "Exit without taking action"
multiSelect: false
If the user selects "Apply updates":
Guard check: If UPDATE_MODE = false, inform the user:
Update mode is not enabled. Re-run with the --update flag to apply changes:
/validate-adapter --target ${TARGET_PLATFORM} --update
Then return to the action choice prompt.
If UPDATE_MODE = true, proceed with updates:
Update stale entries: For each stale entry in the validation report:
- Record each change as { section, entry, old_value, new_value }

Add missing entries: For each missing entry in the validation report:

- Record each addition as { section, entry, new_value }

Handle removed entries: For each removed entry, ask the user individually:
AskUserQuestion:
questions:
- header: "Removed Feature: ${entry_name}"
question: |
The adapter references "${entry_name}" in the ${section} section,
but research indicates this feature has been removed from ${TARGET_PLATFORM}.
Evidence: ${evidence}
What should be done with this entry?
options:
- label: "Remove entry"
description: "Delete this entry from the adapter"
- label: "Keep with deprecation note"
description: "Keep the entry but add a deprecation note"
multiSelect: false
If the user chooses to keep the entry, append [DEPRECATED] to the entry's notes column.

Update metadata: In the Adapter Version section:

- Increment the adapter_version patch number (e.g., 1.0.0 -> 1.0.1)
- Set last_updated to today's date (ISO 8601 format)
- Update target_platform_version to research_platform_version if it changed
- Append an entry to the changelog field:
${new_version} (${today}): Validation-driven update -- ${stale_count} stale entries updated, ${missing_count} missing entries added, ${removed_count} removed entries handled.
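The metadata bump above can be sketched in Python (illustrative; the helper names are assumptions):

```python
from datetime import date

def bump_patch(version: str) -> str:
    """Increment the patch component of a semver string, e.g. 1.0.0 -> 1.0.1."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

def changelog_line(new_version: str, stale: int, missing: int, removed: int) -> str:
    """Render a changelog entry in the format shown above."""
    today = date.today().isoformat()  # ISO 8601, per the spec
    return (f"{new_version} ({today}): Validation-driven update -- "
            f"{stale} stale entries updated, {missing} missing entries added, "
            f"{removed} removed entries handled.")
```

Note that bump_patch assumes a plain MAJOR.MINOR.PATCH string with no pre-release or build suffix.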
Write updated adapter file:
Write: ${CLAUDE_PLUGIN_ROOT}/references/adapters/${TARGET_PLATFORM}.md
Display update summary:
Adapter updated successfully:
- Stale entries updated: ${updated_stale_count}
- Missing entries added: ${added_missing_count}
- Removed entries handled: ${handled_removed_count}
- New adapter version: ${new_version}
- Target platform version: ${research_platform_version}
If the user selects "Export report":
Determine the output directory:
- If OUTPUT_DIR was provided by a caller (sub-workflow mode), use it

Write the validation report as markdown:
Write: ${OUTPUT_DIR}/adapter-validation-${TARGET_PLATFORM}.md
The report file should include:
Inform the user of the export location:
Validation report exported to: ${OUTPUT_DIR}/adapter-validation-${TARGET_PLATFORM}.md
Handled in Phase 1 Step 3. The user is offered available alternatives or informed that no adapters exist.
Handled in Phase 2 Step 3. The user can retry, proceed with structural validation only, or cancel.
If the researcher returns a profile with all sections marked as "unknown" or confidence "low":
If the adapter file cannot be parsed (missing H2 headers, unrecognized structure):
The researcher agent is spawned via Task tool with subagent_type: "agent-alchemy-plugin-tools:researcher". The researcher uses Sonnet model and has access to WebSearch, WebFetch, Read, Glob, Grep, and Context7 tools.
Key coordination points: