Use this skill whenever a user needs help selecting, justifying, or evaluating research methods for anthropological or qualitative research. Triggers include: any mention of "methods," "methodology," "method selection," "which methods should I use," "how to choose methods," "how do I justify my methods," "method-stance alignment," "my reviewer says my methods don't match my theory," "multi-method design," "mixed methods in anthropology," or "what methods fit an interpretivist / critical / STS / feminist / phenomenological / applied / cognitive / linguistic / computational project." Also trigger when users ask about epistemic coherence between theory and methods, evidence types needed for a research question, how to compose a multi-method system, how to write a methods justification narrative, or how to handle data governance as a design decision. Covers all anthropological subfields and qualitative social science approaches. Do NOT use for writing a full research plan (use research-plan skill), grant proposals targeting a specific funder (use grant-proposal skill), or designing specific instruments like interview guides or surveys (use fieldwork-instruments skill when available). This skill handles the upstream design decision of which methods and why.
From ai-anthropology: `npx claudepluginhub mattartzanthro/ai-anthropology-toolkit --plugin ai-anthropology`. This skill uses the workspace's default tool permissions.
Reference files: references/method-modules.md, references/method-stance-compatibility.md, references/methodology-selection-guide.md
Select and justify research methods for anthropological research by treating method choice as an epistemic-design problem: specifying a warranted path from an epistemic stance and research question to defensible claims, using evidence types the stance treats as meaningful, through a coherent multi-method system whose internal logic is explicit. Method selection is not "picking tools" — it is an argument about why these methods, for this question, from this stance, will produce the evidence needed to support the claims you intend to make.
| Task | Reference |
|---|---|
| Decision workflow, criteria, failure modes, checklist | Read references/methodology-selection-guide.md |
| Method-stance compatibility matrix, justification templates, worked examples | Read references/method-stance-compatibility.md |
| Method module details (evidence, claims, limitations, ethics), multi-method patterns | Read references/method-modules.md |
Determine the entry point:
Before generating any content, collect these inputs:
Required:
Important but can be inferred:

4. Scale and temporality. Small-N intensive, multi-population, longitudinal, cross-sectional? Affects the design logic.
5. Access constraints. Where observation is impossible or risky, trace and documentary methods become more central; where recruitment is constrained, sampling logic must shift.
6. Risk posture. Low-risk, vulnerable populations, high-surveillance, politically sensitive. Affects ethics and data governance requirements.
7. Resources, skills, time. Methods that cannot be implemented with rigor are not "best" methods. Short timelines may favor rapid assessment.
Helpful but not required:
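The intake inputs above can be modeled as a simple structure. This is a minimal sketch only; the class and field names are illustrative, not part of the skill's actual interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DesignInputs:
    """Inputs collected before generating any method-selection output."""
    # Required (elicited from the user)
    stance: str               # e.g. "interpretivist", "critical", "computational"
    research_question: str
    # Important but can be inferred (items 4-7 above)
    scale: Optional[str] = None               # e.g. "small-N intensive", "longitudinal"
    access_constraints: Optional[str] = None
    risk_posture: str = "low-risk"            # or "vulnerable populations", ...
    resources: Optional[str] = None           # skills, time, budget

    def inferable_fields_missing(self) -> list[str]:
        """Names of important-but-inferable fields still unset."""
        candidates = [("scale", self.scale),
                      ("access_constraints", self.access_constraints),
                      ("resources", self.resources)]
        return [name for name, value in candidates if value is None]
```

Listing the unset inferable fields makes explicit what the skill must infer (or ask about) before proceeding, rather than silently defaulting.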
- Read references/methodology-selection-guide.md for the decision workflow, criteria, and checklist.
- Read references/method-stance-compatibility.md when the user needs stance-specific guidance: compatibility ratings, justification templates, or worked examples.
- Read references/method-modules.md when comparing method options or composing a multi-method system: evidence types, claims supported, limitations, ethical considerations, and multi-method design patterns.

Follow this sequence (detailed in the guide reference file):
1. Define the claim envelope. Based on the epistemic stance, state what kinds of claims are admissible and what kinds are not. An interpretivist project makes claims about meaning, not prevalence. A critical project makes claims about power, not neutral description.
2. Decompose the question into evidence needs. Translate the research question into required evidence types: embodied practices (requires observation), meaning-making (requires interpretive elicitation plus context), distributions (requires standardized measurement), discourse-in-use (requires recordings and transcription), historical sequence (requires archives), network/process across sites (requires multi-sited or trace strategies), materiality (requires object-oriented or sensory methods).
3. Generate candidate method modules. From the 14 method modules in the method-modules reference, identify which could produce the required evidence.
4. Check epistemic coherence. Using the compatibility matrix, rate each candidate method against the user's stance: Standard (S), Coherent (C), Innovative/defensible (I), or High-tension (T). Flag any T-rated methods and explain what reframing would be needed to make them defensible.
5. Check field constraints. Filter candidates by access, risk, consent feasibility, platform terms, legality, and resource availability.
6. Compose the multi-method system. Assign each surviving method a role: primary evidence generation, complementary perspective, contextualization, or validation. Ensure the system has internal logic — methods should relate to each other, not just coexist.
7. Specify the integration plan. State when and where evidence streams are joined, what analytic strategy governs integration, and what meta-inferences result. Do not use "triangulation" without specifying the type (data, method, theory) and what convergence or divergence means.
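The coherence-checking step can be sketched mechanically. The matrix fragment and method names below are illustrative stand-ins; the real ratings live in references/method-stance-compatibility.md:

```python
# Illustrative fragment of the stance -> method compatibility matrix.
# S = Standard, C = Coherent, I = Innovative/defensible, T = High-tension.
COMPATIBILITY = {
    "interpretivist": {
        "participant observation": "S",
        "semi-structured interviews": "S",
        "survey": "T",
        "topic modeling": "I",
    },
}

def check_coherence(stance: str, candidates: list[str]) -> dict[str, list[str]]:
    """Bucket candidate methods by compatibility rating for a stance.

    T-rated methods are not discarded: they are surfaced so the
    justification narrative can explain the reframing they require.
    """
    buckets: dict[str, list[str]] = {"S": [], "C": [], "I": [], "T": []}
    matrix = COMPATIBILITY.get(stance, {})
    for method in candidates:
        # Unknown stance-method pairs default to "defensible with argument"
        buckets[matrix.get(method, "I")].append(method)
    return buckets
```

For example, `check_coherence("interpretivist", ["participant observation", "survey"])` places "survey" in the "T" bucket, signalling that it needs explicit reframing (e.g. as contextualization rather than primary evidence) before it is defensible.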
Produce one or more of these deliverables depending on user needs:
Before presenting output, verify against the full checklist in references/methodology-selection-guide.md. Common failure modes and their preventions:
| Failure mode | Prevention |
|---|---|
| Methods as grocery list — no inferential role specified | Require a role statement per method: evidence -> claim -> limitation |
| Generic justification — "participant observation is a hallmark of anthropology" | Enforce stance-and-question anchoring: why this method is necessary here |
| Stance-method mismatch hidden by vague language | Add claim envelope step; check compatibility matrix; flag T-rated methods |
| Integration left implicit — "triangulation" as magic word | Specify type of triangulation and what convergence/divergence means |
| Sample size by round number or unexamined "saturation" | Use information power or empirically grounded saturation reasoning |
| Ethics treated as appendix | Require ethics and data governance as design determinants, not afterthoughts |
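The first failure mode (methods as a grocery list) can be guarded against by requiring a role statement per method, as the table prescribes. A minimal sketch, with field names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class RoleStatement:
    """One method's inferential role: evidence -> claim -> limitation."""
    method: str
    role: str        # "primary", "complementary", "contextualization", "validation"
    evidence: str    # what evidence type the method produces
    claim: str       # what claim that evidence supports
    limitation: str  # what the method cannot show

def grocery_list_check(statements: list[RoleStatement]) -> list[str]:
    """Return methods whose inferential role is underspecified."""
    return [s.method for s in statements
            if not (s.role and s.evidence and s.claim and s.limitation)]
```

Any method returned by the check has been listed without an argument for why it belongs in the design — exactly the failure the table warns against.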
Example 1: Selecting methods for an interpretivist project
Input: "I'm studying how gig workers make meaning out of algorithmic management. I'm an interpretivist drawing on practice theory. What methods should I use?"
Output approach:
Example 2: Selecting methods for a computational/digital project
Input: "I want to study how climate misinformation spreads in online communities. I'm coming from a computational/digital ethnography perspective. I'm thinking of scraping forum data and doing topic modeling."
Output approach:
Example 3: Checking stance-method coherence
Input: "I'm doing a feminist study of reproductive healthcare access but my advisor wants me to include a survey. Is that compatible with my framework?"
Output approach: