REQUIRED Phase 5 of /ds workflow (final). Checks reproducibility and requires user acceptance.
Final verification gate for data science workflows. Triggers at the end of `/ds` to re-run analysis fresh, demonstrate reproducibility, and conduct user acceptance interview before claiming completion.
Install via: `/plugin marketplace add edwinhu/workflows` then `/plugin install workflows@edwinhu-plugins`. This skill inherits all available tools; when active, it can use any tool Claude has access to.
Announce: "I'm using ds-verify (Phase 5) to confirm reproducibility and completion."
Final verification with reproducibility checks and user acceptance interview.
<EXTREMELY-IMPORTANT>
## The Iron Law of DS Verification

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION. This is not negotiable.

Before claiming analysis is complete, you MUST:
- Re-run the analysis fresh, not from cache
- Check outputs against the success criteria
- Demonstrate reproducibility with a second run
- Conduct the user acceptance interview

This applies even when:
- You ran the analysis earlier and it worked
- The results "should" be the same
- The outputs look right at a glance
- You are confident the user will be happy

If you catch yourself about to claim completion without fresh evidence, STOP.
</EXTREMELY-IMPORTANT>
Common rationalizations to reject:

| Thought | Why It's Wrong | Do Instead |
|---|---|---|
| "Results should be the same" | "Should" isn't verification | Re-run and compare |
| "I ran it earlier" | Earlier isn't fresh | Run it again now |
| "It's reproducible" | Claim requires evidence | Demonstrate reproducibility |
| "User will be happy" | Assumption isn't acceptance | Ask explicitly |
| "Outputs look right" | "Look right" isn't verified | Check against criteria |
Before making ANY completion claim:
1. RE-RUN → Execute fresh, not from cache
2. CHECK → Compare outputs to success criteria
3. REPRODUCE → Same inputs → same outputs
4. ASK → User acceptance interview
5. CLAIM → Only after steps 1-4
Skipping any step is not verification.
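A minimal sketch of steps 1 and 2 as code, assuming a callable `run_analysis` entry point; the criteria shown are hypothetical placeholders for the project's actual success criteria:

```python
# Sketch of gate steps 1-2: fresh run, then explicit criteria checks.
# `run_analysis` and the criteria below are hypothetical placeholders.
def verify_fresh_run(run_analysis, criteria, seed=42):
    result = run_analysis(seed=seed)  # Step 1: RE-RUN, never from cache
    failed = [name for name, check in criteria.items() if not check(result)]
    assert not failed, f"Criteria failed: {failed}"  # Step 2: CHECK
    return result

# Example criteria: each name maps to a predicate over the result.
criteria = {
    "has_rows": lambda df: len(df) > 0,
    "no_nulls": lambda df: df.notna().all().all(),  # assumes a DataFrame
}
```

Steps 3-5 are covered by the reproducibility check and user interview below.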
## User Acceptance Interview

CRITICAL: Before claiming completion, conduct the user interview.

```yaml
AskUserQuestion:
  question: "Were there specific methodology requirements I should have followed?"
  options:
    - label: "Yes, replicating existing analysis"
      description: "Results should match a reference"
    - label: "Yes, required methodology"
      description: "Specific methods were mandated"
    - label: "No constraints"
      description: "Methodology was flexible"
```
If replicating: compare the results against the reference and record the match status before continuing the interview.
```yaml
AskUserQuestion:
  question: "Do these results answer your original question?"
  options:
    - label: "Yes, fully"
      description: "Analysis addresses the core question"
    - label: "Partially"
      description: "Some aspects addressed, others missing"
    - label: "No"
      description: "Does not answer the question"
```
If "Partially" or "No":
/ds-implement to address gapsAskUserQuestion:
question: "Are the outputs in the format you need?"
options:
- label: "Yes"
description: "Format is correct"
- label: "Need adjustments"
description: "Format needs modification"
```yaml
AskUserQuestion:
  question: "Do you have any concerns about the methodology or results?"
  options:
    - label: "No concerns"
      description: "Comfortable with approach and results"
    - label: "Minor concerns"
      description: "Would like clarification on some points"
    - label: "Major concerns"
      description: "Significant issues need addressing"
```
## Reproducibility Check

MANDATORY: Demonstrate reproducibility before completion.

```python
import hashlib

def digest(result) -> str:
    # Stable across processes, unlike built-in hash(); repr() is fine
    # for small results (see the DataFrame digest below for large ones)
    return hashlib.sha256(repr(result).encode()).hexdigest()

# Run 1
result1 = run_analysis(seed=42)
hash1 = digest(result1)

# Run 2
result2 = run_analysis(seed=42)
hash2 = digest(result2)

# Verify
assert hash1 == hash2, "Results not reproducible!"
print(f"Reproducibility confirmed: {hash1} == {hash2}")
```
For notebooks:

```bash
# Clear outputs and re-run all cells in place
jupyter nbconvert --execute --inplace notebook.ipynb
# or parameterize a fresh run with an explicit seed
papermill notebook.ipynb output.ipynb -p seed 42
```
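When the notebook writes artifact files, hashing those files after two fresh runs gives concrete evidence; a sketch in Python (the `outputs_run1`/`outputs_run2` directories are hypothetical examples):

```python
# Hash artifact files from two notebook runs; identical digests
# demonstrate reproducibility. Directory names are hypothetical.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

run1 = {p.name: file_digest(p) for p in sorted(Path("outputs_run1").glob("*.csv"))}
run2 = {p.name: file_digest(p) for p in sorted(Path("outputs_run2").glob("*.csv"))}
assert run1 == run2, "Notebook runs produced different artifacts"
```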
## Evidence Requirements

| Claim | Required Evidence |
|---|---|
| "Analysis complete" | All success criteria verified |
| "Results reproducible" | Same output from fresh run |
| "Matches reference" | Comparison showing match |
| "Data quality handled" | Documented cleaning steps |
| "Methodology appropriate" | Assumptions checked |
These do NOT count as verification:
- Outputs left over from an earlier run
- Results that "look right" on visual inspection
- Assuming the user will be satisfied
- Claiming reproducibility without a second fresh run

Report the outcome using this template:
## Verification Report: [Analysis Name]
### Technical Verification
#### Outputs Generated
- [ ] Output 1: [location] - verified [date/time]
- [ ] Output 2: [location] - verified [date/time]
#### Reproducibility Check
- Run 1 hash: [value]
- Run 2 hash: [value]
- Match: YES/NO
#### Environment
- Python: [version]
- Key packages: [list with versions]
- Random seed: [value]
### User Acceptance
#### Replication Check
- Constraint: [none/replicating/required methodology]
- Reference: [if applicable]
- Match status: [if applicable]
#### User Responses
- Results address question: [yes/partial/no]
- Output format acceptable: [yes/needs adjustment]
- Methodology concerns: [none/minor/major]
### Verdict
**COMPLETE** or **NEEDS WORK**
[If COMPLETE]
- All technical checks passed
- User accepted results
- Reproducibility demonstrated
[If NEEDS WORK]
- [List items requiring attention]
- Recommended next steps
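The Environment section of the report above can be captured programmatically; a sketch (the package list is a hypothetical example, substitute the actual analysis dependencies):

```python
import sys
from importlib.metadata import version

print(f"Python: {sys.version.split()[0]}")
for pkg in ("pandas", "numpy", "scikit-learn"):  # hypothetical package list
    print(f"{pkg}: {version(pkg)}")
print("Random seed: 42")  # record the seed actually used
```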
Only claim COMPLETE when ALL are true:
- A fresh run executed and regenerated all outputs
- Outputs checked against the success criteria
- Reproducibility demonstrated with matching hashes from two runs
- User confirmed the results answer the original question
- User accepted the output format
- No unresolved methodology concerns
Both technical and user acceptance must pass. No shortcuts.
When the user confirms all criteria are met:

Announce: "DS workflow complete. All 5 phases passed."

The /ds workflow is now finished. Offer follow-up help, such as summarizing the findings or extending the analysis, if the user wants it.