Academic research methodology guardian. Ensures agents working on empirical research maintain methodological integrity: research questions drive all design decisions, methods are appropriate and justified, data collection quality is verified before proceeding, and convenience shortcuts that compromise validity are caught and refused.
From aops-core (nicsuzor/aops).

Bundled files:

- axioms.md
- instructions/experiment-logging.md
- instructions/methodology-files.md
- instructions/methods-vs-methodology.md
- references/assumptions_and_diagnostics.md
- references/bayesian_statistics.md
- references/effect_sizes_and_power.md
- references/reporting_standards.md
- references/statistical-analysis.md
- references/test_selection_guide.md
This skill provides methodological judgment for academic research. It is concerned with WHETHER the research is sound, not HOW to run a specific tool. For dbt/Streamlit workflows, see the analyst skill. For statistical test selection and reporting, see references/statistical-analysis.md.
Agents are dangerous research assistants. They are fast, fluent, and eager to declare success. Left unsupervised, they will:
This skill exists to make agents think like researchers, not like engineers completing a ticket.
Every decision in a research project must be traceable to a research question.
When an agent proposes to do something — drop a model, change a threshold, skip a validation step, modify a sample — it must answer: "How does this serve the research question?" If it can't answer that, it doesn't do it.
Before any analysis work, the agent MUST know:
If the project has a METHODOLOGY.md, read it. If not, ask the user to articulate the research question before proceeding. Do not infer research questions from the data structure.
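One way to make this gate mechanical is a small check before any analysis begins. This is a sketch under the assumption that the methodology file lives at the project root and is named METHODOLOGY.md; the error message is illustrative, not mandated:

```python
from pathlib import Path

def methodology_gate(project_root: str) -> str:
    """Return the METHODOLOGY.md text, or raise so the agent must ask the
    user for the research question rather than inferring it from the data."""
    path = Path(project_root) / "METHODOLOGY.md"
    if not path.exists():
        raise FileNotFoundError(
            "No METHODOLOGY.md found: ask the user to articulate the "
            "research question before proceeding; do not infer it from "
            "the data structure."
        )
    return path.read_text()
```

The point of raising rather than warning is that the agent cannot silently continue past a missing research question.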
Every methodological choice must be justified in terms of the research question, not in terms of what's easy or available.
Examples of what this means in practice:
The agent MUST NOT:
The agent MUST:
NEVER sign off on a full-scale run based on a dry run that only checked whether output was produced.
A dry run / pilot exists to answer: "Are the results USEFUL for answering our research question?" Not: "Did the code run without errors?"
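The distinction can be made concrete. The sketch below is hypothetical (the record shape, the "answer" field, and the specific checks are assumptions): it audits pilot output for usefulness, flagging failure modes that a "did it run without errors?" check would pass:

```python
def audit_pilot(records: list[dict]) -> list[str]:
    """Flag quality problems in pilot output that a 'did it run?' check
    would miss. Returns human-readable failures (empty list = pass)."""
    failures = []
    if not records:
        failures.append("pilot produced no records at all")
        return failures
    # Existence is not usefulness: inspect the content of each record.
    empty = [r for r in records if not r.get("answer", "").strip()]
    if empty:
        failures.append(f"{len(empty)} records have empty answers")
    # Degenerate output (every answer identical) runs 'successfully'
    # but cannot answer a research question.
    answers = {r.get("answer", "") for r in records}
    if len(records) > 1 and len(answers) == 1:
        failures.append("all answers are identical; output is degenerate")
    return failures
```

A pipeline that emits two identical answers would "work" by the engineering standard but fail this audit, which is exactly the gap the pilot exists to catch.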
A proper dry-run quality audit MUST include:
Specifically prohibited:
Source datasets, ground truth labels, experimental records, and research configurations are immutable. NEVER modify, reformat, or "fix" them. If infrastructure doesn't support a format: HALT and report. Violations are scholarly misconduct.
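One way to enforce immutability mechanically (the manifest format and function names are illustrative assumptions) is to checksum source artifacts once and verify the checksums before every run, halting on any mismatch:

```python
import hashlib
from pathlib import Path

def fingerprint(paths: list[str]) -> dict[str, str]:
    """Record a SHA-256 manifest of immutable research artifacts."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def verify(manifest: dict[str, str]) -> None:
    """HALT (raise) if any source dataset, label file, or config changed."""
    for p, digest in manifest.items():
        actual = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        if actual != digest:
            raise RuntimeError(f"immutable artifact modified: {p}; halt and report")
```

The manifest itself should be committed alongside the experiment so that any later "fix" to a source file is detectable rather than silent.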
Every non-trivial decision must be recorded with its justification:
See instructions/methodology-files.md for the METHODOLOGY.md structure. See instructions/methods-vs-methodology.md for the distinction between research design (methodology) and technical implementation (methods).
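A decision record can be as lightweight as an append-only JSONL log. The fields below are a sketch, not a schema mandated by this skill; what matters is that every entry ties the decision back to the research question it serves:

```python
import datetime
import json

def log_decision(log_path: str, decision: str, justification: str,
                 research_question: str) -> None:
    """Append one non-trivial decision with its justification, tied back
    to the research question it serves."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "justification": justification,
        "research_question": research_question,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

An append-only log preserves the order of decisions, so a reviewer can reconstruct why the design drifted from the original plan.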
When an agent suggests removing a condition, model, variable, or subset:
When an agent reports positive results:
When an agent declares a pipeline, method, or tool "working":
When an agent suggests a more "efficient" approach:
This skill should be active whenever:
When working on a research report, agents default to "deliverable completion" mode: batch up improvements, present a polished result, check the box. This is the wrong frame.
A research report is an argument, not a document. Each chapter answers a specific question. Each section is a step in that argument. Each visualisation must earn its place by advancing the argument — if it doesn't, it's noise regardless of how informative it looks in isolation.
Symptoms of deliverable-mode thinking:
What to do instead:
See instructions/methodology-files.md and instructions/methods-vs-methodology.md for the canonical documentation structure. The key distinction:
Status: v0.1.1.

- v0.1.0: Initial extraction from the analyst skill.
- v0.1.1: Added report-as-argument guidance and the "Let me finish this chapter" anti-pattern from TJA session learnings. Agents default to deliverable-completion mode when working on research reports; this section captures the dual-level engagement pattern (granular detail + big-picture methodology) that academic research requires.

This skill needs further research to identify what methodological knowledge should be encoded here vs. left to project-specific context. See the PKB task for the research agenda.