From dx-core
Check Definition of Done compliance and auto-fix gaps — reviews PR, tasks, docs. Posts summary to ADO.
npx claudepluginhub easingthemes/dx-aem-flow --plugin dx-core

This skill is limited to using the following tools:
Read shared/provider-config.md for provider detection and tool mapping.
Read .ai/config.yaml:
- tracker.provider (or scm.provider for backward compat) — ado (default) or jira

If provider = ado:
- scm.org
- scm.project

If provider = jira:
- jira.url
- jira.project-key
- fields.status.name to check the equivalent status.

You check whether a work item meets the Definition of Done. You fetch the DoD checklist from the wiki, check ADO/Jira state and codebase via MCP, and produce dod.md — a structured pass/fail report.
This validates the actual deliverables (code, tests, PR, build) — not agent workflow artifacts. Works the same whether the story was implemented manually or via the AI workflow.
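As a sketch, the provider settings above might look like this in .ai/config.yaml (key names are taken from this doc; all values are placeholders):

```yaml
tracker:
  provider: ado                        # "ado" (default) or "jira"; scm.provider is still read for backward compat
scm:
  org: "https://dev.azure.com/myorg"   # placeholder ADO org URL
  project: "MyProject"                 # placeholder ADO project name
jira:
  url: "https://myorg.atlassian.net"   # placeholder; only read when provider = jira
  project-key: "PROJ"                  # placeholder
fields:
  status:
    name: "Status"                     # placeholder; used to check the equivalent status in Jira
```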
Use ultrathink for this skill — cross-referencing multiple sources (ADO/Jira state, codebase, PR status) requires systematic verification.
Before creating tasks, use TaskList to check for existing tasks from a previous run. If stale tasks exist, cancel them first with TaskUpdate (status: cancelled). Then create tasks using TaskCreate. Mark each in_progress when starting, completed when done.
Parse the argument to extract the ADO work item ID (from number or URL).
If no argument provided:
SPEC_DIR=$(bash .ai/lib/dx-common.sh find-spec-dir 2>/dev/null)
Extract ID from the spec directory name. If no spec dir found, ask the user for the work item ID.
Read .ai/config.yaml for:
- scm.org — ADO org URL
- scm.project — ADO project name
- scm.repo-id — repository ID
- scm.wiki-dod-url — DoD wiki page URL (required)
- build.command — build command

Read scm.wiki-dod-url from .ai/config.yaml.
If not configured: Print error and STOP:
✗ scm.wiki-dod-url not configured in .ai/config.yaml.
Add the wiki URL and re-run. See docs/authoring/wiki-checklist-format.md for page format requirements.
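When configured, the additional keys this step reads might look like this (extending the earlier sketch; values are placeholders):

```yaml
scm:
  repo-id: "00000000-0000-0000-0000-000000000000"   # placeholder repository ID
  wiki-dod-url: "https://dev.azure.com/myorg/MyProject/_wiki/wikis/MyProject.wiki/123/Definition-of-Done"   # placeholder; required
build:
  command: "npm run build"                           # placeholder build command
```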
Fetch the wiki page content via MCP:
mcp__ado__wiki_get_page_content
url: <scm.wiki-dod-url>
If fetch fails: Print error and STOP:
✗ Could not fetch DoD wiki page from <url>.
Verify the URL is correct and the wiki page exists.
If confluence.dod-page-title is configured in .ai/config.yaml:
mcp__atlassian__confluence_search
cql: "title = '<confluence.dod-page-title>' AND space = '<confluence.space-key>'"
Extract the page ID, then fetch:
mcp__atlassian__confluence_get_page
page_id: "<page ID from search>"
If confluence.dod-page-title is NOT configured, fall back to .ai/rules/dod-checklist.md. If neither exists, validate against built-in DoD criteria only.
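A sketch of the Confluence fallback keys referenced above (placeholders; only relevant for the Jira/Confluence path):

```yaml
confluence:
  space-key: "ENG"                       # placeholder Confluence space key
  dod-page-title: "Definition of Done"   # placeholder page title searched via CQL
```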
The wiki page uses a standard format (see docs/authoring/wiki-checklist-format.md):
- `## N. Section Title` headings define sections
- Criteria tables with the columns Criterion | Who checks | What to verify
- Who checks values: Agent (automated), Human (manual), Advisory (warn-only)
- **Skip trigger:** paragraphs define when to skip criteria
- A `## DoD Completion Summary` section with scoring rules

Parse the wiki content to extract:
If a spec directory exists for this work item:
- Check whether dod.md already exists in it.
- If dod.md is already up to date — print "dod.md already up to date — skipping" and STOP.
- If dod.md exists but is outdated — print "dod.md exists but is outdated — regenerating" and continue.

Collect evidence for each criterion parsed from the wiki. Evidence comes from the actual project state, not from agent workflow files.
From ADO (via MCP):
- vstfs:///Git/PullRequestId/ links

From codebase:
- Test files (`**/test/**`, `**/tests/**`, `**/*Test.java`, `**/*.test.js`, `**/*.spec.js`)

From Figma (optional):
For each criterion parsed from the wiki, evaluate as PASS, FAIL, WARN, or SKIP:
Map each wiki criterion to the appropriate evidence check based on the "What to verify" column. The verification instructions in the wiki are human-readable — interpret them to determine what ADO state, codebase patterns, or PR data to check.
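For illustration only, hypothetical wiki criteria mapped to evidence checks (the real criteria, wording, and "Who checks" values come from the wiki page, not from this file):

```yaml
# Hypothetical examples — not real criteria from any wiki page
- criterion: "PR approved with all review threads resolved"
  who-checks: Agent
  evidence: "linked PR votes and open thread count from ADO"
  result: PASS
- criterion: "Tests exist for changed components"
  who-checks: Agent
  evidence: "test files matching **/*.test.js or **/*Test.java for touched modules"
  result: FAIL
- criterion: "Performance budget reviewed"
  who-checks: Advisory
  evidence: "no automated check; flagged for awareness"
  result: WARN
```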
Write dod.md to the spec directory (create spec dir if needed).
Read .ai/templates/spec/dod.md.template and follow that structure, adapting to the wiki-driven criteria:
If dry run: print "Dry run — skipping ADO comment" and show what would be posted. Skip the rest of this step.
Before posting, check for an existing comment to avoid duplicates:
If provider = ado, fetch existing comments:
mcp__ado__wit_list_work_item_comments
project: "<ADO project>"
workItemId: <id>
If provider = jira: comments are included in the jira_get_issue response. Fetch the issue and search fields.comment.comments[].body for the signature:
mcp__atlassian__jira_get_issue
issue_key: "<issue key>"
Search for a comment containing the signature DoD Check: with the work item title.
If found, compare the existing comment with the new results:
- If nothing has changed — print "ADO comment already up to date — skipping" and skip posting.
- If the results have changed — post a condensed update instead of the full comment:

### DoD Updated
**Verdict:** <new verdict> (was <old verdict>)
**Score:** <new score> (was <old score>)
**Changes:** <1-2 bullet summary of what changed>
If not found → post full comment using template.
If provider = ado, post:
mcp__ado__wit_add_work_item_comment
project: "<ADO project>"
workItemId: <id>
text: "<condensed dod results>"
format: "markdown"
If provider = jira:
mcp__atlassian__jira_add_comment
issue_key: "<issue key>"
comment: "<condensed dod results>"
Read .ai/templates/ado-comments/dod-summary.md.template and follow that structure.
If all criteria passed, skip to step 9.
If DX_PIPELINE_MODE is set:
- Check whether $SPEC_DIR/research.md has a ## Cross-Repo Scope section (see shared/repo-discovery.md).
- If it does, write delegate.json and STOP.

If DX_PIPELINE_MODE is not set: skip this step (local mode).
Classify each FAIL criterion:
Auto-fixable (agent can fix directly):
- Missing docs (e.g., share-plan.md) → generate from explain.md

Needs-human (create Task work items):
For each auto-fixable failure:
- Print "Fixing: <criterion> — <action>".
- Apply the fix, then print "Fixed: <criterion>" or "Fix failed: <criterion> — <reason>".
- Track results: {criterion, action, result: fixed|failed|skipped}.
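A minimal sketch of that tracking structure (shape taken from the {criterion, action, result} note above; the values are hypothetical):

```yaml
auto_fixes:
  - criterion: "share-plan.md present"                  # hypothetical criterion
    action: "generated share-plan.md from explain.md"
    result: fixed                                       # fixed | failed | skipped
```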
For each needs-human failure, create a child Task work item:
If provider = ado:
- Title: [DoD] <criterion description>
- Description: DoD check for ADO #<parent-id> failed: <failure details>\n\nHow to fix: <actionable instruction>

If provider = jira:
mcp__atlassian__jira_create_issue
project_key: "<jira.project-key>"
issue_type: "<jira.child-issue-type>"
summary: "[DoD] <criterion description>"
parent_key: "<parent issue key>"
description: "DoD check for <parent issue key> failed: <details>\n\nHow to fix: <instruction>"
Track created tasks: {criterion, taskId, title}.
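The Jira branch above shows the exact MCP call; the ADO branch does not. A hedged sketch of what the ADO creation call could look like — the tool name and parameter shape here are assumptions modeled on the other mcp__ado__wit_* calls in this skill, so verify them against your ADO MCP server:

```yaml
# Assumption: an mcp__ado__wit_create_work_item tool exists with roughly this shape
tool: mcp__ado__wit_create_work_item
project: "<ADO project>"
workItemType: "Task"
title: "[DoD] <criterion description>"
description: "DoD check for ADO #<parent-id> failed: <failure details>\n\nHow to fix: <actionable instruction>"
parentId: <parent work item id>    # assumption: parent linking supported at creation time
```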
If auto-fixes improved the score (at least one criterion flipped to PASS):
- Run /dx-doc-gen <work-item-id> if available
- Or run /aem-doc-gen <work-item-id> if available (AEM project)
- If neither is installed, print "Documentation generation skipped (doc-gen skills not installed)."

Re-run the DoD criteria evaluation (same as step 5) against the updated evidence. Update dod.md with the new results. Track the delta:
## DoD Check: <Title> (ADO #<id>)
**Verdict:** <PASS/FAIL>
**Score:** <N>/<total automated criteria>
**Failures:** <count or "none">
**Warnings:** <count or "none">
**DoD Source:** <wiki URL>
<If auto-fixes were attempted:>
### Auto-Fixed
| Criterion | Action | Result |
|-----------|--------|--------|
| <criterion> | <what was done> | Fixed / Failed |
### Tasks Created (needs human)
**If provider = ado:**
| Criterion | Task | Assigned |
|-----------|------|----------|
| <criterion> | ADO #<task-id>: <title> | <assignee or unassigned> |
**If provider = jira:**
| Criterion | Task | Assigned |
|-----------|------|----------|
| <criterion> | [<issue-key>]({jira.url}/browse/<issue-key>): <title> | <assignee or unassigned> |
### Updated Score
**Before:** <N>/<total> → **After:** <M>/<total>
<If all pass after fixes:>
**Verdict: Ready for QA** — all DoD criteria now met.
<If still failing after fixes:>
**Verdict: <M>/<total> passed.** <count> tasks created for remaining items.
<If no fixes needed (all passed initially):>
### Recommended action
No action needed — all automated DoD criteria passed.
/dx-req-dod 2435084
Fetches DoD checklist from wiki, checks PR status/votes/threads, tests, build, accessibility, child tasks. Produces dod.md with PASS/FAIL for each criterion. Auto-fixes what it can and creates Task work items for the rest.
/dx-req-dod https://dev.azure.com/myorg/MyProject/_workitems/edit/2435084
Extracts ID from URL. Same check.
Cause: The DoD wiki URL is not set in .ai/config.yaml.
Fix: Add wiki-dod-url under the scm: section. See docs/authoring/wiki-checklist-format.md for how to create the wiki page.
Cause: No PR has been created or linked to this work item.
Fix: Create a PR and link it to the work item in ADO.
Cause: No test files found for the changed components.
Fix: Write tests for the implementation. If the change is config/content only, this may be skippable.
Cause: The wiki page doesn't follow the expected format.
Fix: See docs/authoring/wiki-checklist-format.md for the required page structure.
Cause: The fix was applied but verification requires additional conditions (e.g., tests must actually pass, not just exist).
Fix: Review generated test stubs or documentation, fill in real content, and re-run /dx-req-dod.
Cause: The ADO MCP call succeeded for creation but the parent link failed (permissions or API issue).
Fix: Manually link the created Tasks to the parent story in ADO, or re-run (it checks for existing tasks before creating duplicates).