From scientific-method
Synthesize experiment results for a hypothesis into a verdict: confirmed, refuted, or inconclusive. Writes the conclusion into the hypothesis file. Use this skill after running-experiments has filled in results, or whenever the user wants to evaluate whether a hypothesis held up, assess experiment outcomes, or decide what a research iteration means for the original problem.
npx claudepluginhub pipemind-com/pipemind-marketplace --plugin scientific-method

This skill is limited to using the following tools:
This is the judgment step of the scientific method loop. After experiments have run and produced results, this skill reads the evidence honestly and delivers a verdict. The goal is intellectual honesty: a confirmed hypothesis that was never seriously challenged is worthless, and a refuted hypothesis that narrows the search space is valuable. The verdict must follow from the evidence, not from what would be convenient.
Read the hypothesis file (hypothesis-NN.md). If it already contains a ## Conclusion section with a verdict, stop -- this hypothesis has already been concluded. The file is the checkpoint.
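The checkpoint test can be sketched in a few lines. This is an illustrative equivalent only -- the skill performs the check with its own file tools, and `already_concluded` is a hypothetical helper name, not part of the skill:

```python
from pathlib import Path

def already_concluded(hypothesis_path: str) -> bool:
    """Return True if the hypothesis file already carries a verdict."""
    text = Path(hypothesis_path).read_text(encoding="utf-8")
    # The presence of a "## Conclusion" section is the checkpoint:
    # once it exists, this hypothesis must not be judged again.
    return "## Conclusion" in text
```

Because the file itself is the checkpoint, re-running the skill on a concluded hypothesis is a no-op rather than a double judgment.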
For each experiment in the hypothesis file, extract:
Then determine the aggregate verdict. The reasoning here matters more than a mechanical formula, but these principles guide the judgment:
If every experiment was skipped or not-runnable, the verdict is inconclusive with a clear note that the hypothesis could not be tested with available tools.
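As a rough sketch of the aggregation rule above, including the all-skipped case: this mapping is a hypothetical baseline, not the skill's actual logic, which is explicitly qualitative rather than mechanical. The outcome labels ("supports", "contradicts", "skipped", "not-runnable") are assumed for illustration:

```python
def aggregate_verdict(outcomes: list[str]) -> str:
    """Reduce per-experiment outcomes to confirmed / refuted / inconclusive.

    `outcomes` holds one label per experiment. The mapping below is an
    assumed baseline; the real judgment weighs evidence strength too.
    """
    testable = [o for o in outcomes if o not in ("skipped", "not-runnable")]
    if not testable:
        # Nothing could actually run: the hypothesis was never tested.
        return "inconclusive"
    if all(o == "supports" for o in testable):
        return "confirmed"
    if all(o == "contradicts" for o in testable):
        return "refuted"
    # Mixed evidence defaults to inconclusive in this sketch.
    return "inconclusive"
```

The point of the sketch is the first branch: a hypothesis whose experiments were all skipped or not runnable yields inconclusive, never a quiet confirmation.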
Read problem.md in the parent directory. The conclusion is only useful if it feeds back into the original problem, so ground the verdict in the problem's success criteria:
Append to the hypothesis file:
## Conclusion
**Verdict:** <confirmed | refuted | inconclusive>
**Reasoning:** <2-4 sentences explaining why this verdict follows from the experiment results. Be specific -- cite which experiments and what their evidence strengths were. Do not just summarize outcomes; explain why the aggregate picture leads to this verdict.>
**Implication for the problem:** <What this means for the original problem. Connect back to the success criteria from problem.md.>
**Rigor:** <1-3 sentences assessing whether the evidence meets publication-grade standards: reproducibility of methodology, honesty of reporting, appropriate statistical treatment, and any methodological limitations.>
**Novelty (confirmed verdicts only):** <1-2 sentences assessing whether the confirmed result reproduces existing knowledge or contributes something new. Draw from the **Novelty:** tags recorded in experiment Results sections; if none were recorded, assess directly from the Literature section. Use the scale: novel / incremental / replication. Omit this field for refuted/inconclusive verdicts.>
**Follow-up questions:**
- <Question raised by this result that the next iteration should address>
- <...>
Update the ## Status line in the hypothesis file to reflect the verdict:
## Status
<confirmed | refuted | inconclusive>
Use Edit to find and replace the current status value. Do not modify any other content.
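The status update amounts to a targeted find-and-replace. A minimal sketch of an equivalent operation, assuming the status value sits on the line directly under the ## Status heading (the skill itself does this with its Edit tool, not with this code):

```python
import re
from pathlib import Path

def update_status(hypothesis_path: str, verdict: str) -> None:
    """Replace only the value on the line after '## Status'."""
    text = Path(hypothesis_path).read_text(encoding="utf-8")
    # Swap the single status token under the heading, first match only,
    # leaving every other line of the file untouched.
    new_text = re.sub(
        r"(## Status\n)\S+",
        lambda m: m.group(1) + verdict,
        text,
        count=1,
    )
    Path(hypothesis_path).write_text(new_text, encoding="utf-8")
```

Using `count=1` and anchoring on the heading keeps the edit surgical, matching the instruction not to modify any other content.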