Use when conducting customer discovery interviews, user research, surveys, focus groups, or observational research requiring rigorous analysis. Provides a systematic 6-phase framework with mandatory bias prevention (reflexivity, intercoder reliability, disconfirming-evidence search) and reproducible methodology. Peer to hypothesis-testing: qualitative vs. quantitative validation.
Conducts rigorous qualitative research with mandatory bias prevention and verification checkpoints. Use when running customer discovery interviews, user research, or analyzing survey responses requiring systematic analysis with intercoder reliability and disconfirming evidence searches.
/plugin marketplace add tilmon-engineering/claude-skills
/plugin install datapeeker@tilmon-eng-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- templates/focus-groups/phase-1-facilitator-guide.md
- templates/focus-groups/phase-2-session-execution.md
- templates/interviews/phase-1-interview-guide.md
- templates/interviews/phase-2-interview-execution.md
- templates/observations/phase-1-observation-protocol.md
- templates/observations/phase-2-field-work.md
- templates/overview-summary.md
- templates/phase-3-familiarization.md
- templates/phase-4-coding.md
- templates/phase-5-themes.md
- templates/phase-6-reporting.md
- templates/surveys/phase-1-survey-design.md
- templates/surveys/phase-2-survey-distribution.md
- tests/CLAUDE.md
- tests/baseline-results.md
- tests/green-results.md
- tests/rationalization-patterns.md
- tests/scenario-1-skip-bias-documentation.md
- tests/scenario-2-skip-intercoder-reliability.md
- tests/scenario-3-skip-disconfirming-evidence.md

Systematic framework for conducting and analyzing qualitative research (interviews, surveys, focus groups, observations) with rigorous bias prevention and reproducible methodology.
Core principle: Rigor through mandatory checkpoints. Prevent confirmation bias by enforcing disconfirming evidence search, intercoder reliability, and reflexivity documentation.
Peer to hypothesis-testing: hypothesis-testing validates quantitative hypotheses with data analysis. qualitative-research validates qualitative hypotheses with systematic interview/survey analysis.
Use this skill when:
- Conducting customer discovery interviews, user research, surveys, focus groups, or observational research requiring rigorous analysis
- Analyzing existing transcripts or survey responses that need systematic, reproducible qualitative analysis

When NOT to use:
- Validating quantitative hypotheses with statistical data analysis (use the peer skill hypothesis-testing instead)
YOU MUST use TodoWrite to track progress through all 6 phases.
Create todos at the start:
- Phase 1: Research Design (question, method, instrument, biases) - pending
- Phase 2: Data Collection (execute protocol, track saturation) - pending
- Phase 3: Data Familiarization (immerse without coding) - pending
- Phase 4: Systematic Coding (codebook, reliability check) - pending
- Phase 5: Theme Development (build themes, search disconfirming evidence) - pending
- Phase 6: Synthesis & Reporting (findings, limitations, follow-ups) - pending
Update status as you progress. Mark phases complete ONLY after checkpoint verification.
Flexible Entry: If the user has existing data (transcripts, survey responses), you can start at Phase 3. Verify that raw data exists in the raw-data/ directory.
CHECKPOINT: Before proceeding to Phase 2, you MUST have:
- 01-research-design.md completed

Select method and load appropriate template:
- templates/interviews/phase-1-interview-guide.md
- templates/surveys/phase-1-survey-design.md
- templates/focus-groups/phase-1-facilitator-guide.md
- templates/observations/phase-1-observation-protocol.md

Document reflexivity baseline (MANDATORY):
This is NON-NEGOTIABLE. Before any data collection, write down:
Why this matters: If you don't document biases BEFORE data collection, you cannot identify confirmation bias AFTER.
Templates enforce neutral question design. Common mistakes:
Save to 01-research-design.md using template
STOP and verify checkpoint: Cannot proceed to Phase 2 until reflexivity baseline documented.
Why this is wrong: Everyone has assumptions. If you can't name them, they're controlling you invisibly.
Do instead: Write one sentence: "I believe [X] because [Y]." That's your bias. Document it.
Why this is wrong: Expert opinion IS a bias that must be documented. Authority backing is a strong prior.
Do instead: "Expert A said B. This is my assumption going in. Must verify with data."
Why this is wrong: Documenting assumptions takes 5 minutes. Presenting biased findings wastes hours.
Do instead: Set timer for 5 minutes. Write down assumptions. Move on.
CHECKPOINT: Before proceeding to Phase 3, you MUST have:
- Raw data captured in raw-data/ directory
- 02-data-collection-log.md completed

Execute method-specific protocol:
Track toward saturation:
Saturation = when new insights stop emerging
After each interview/session/survey batch, ask:
Document in collection log. Plan to continue until 2-3 consecutive instances add nothing new.
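The stopping rule above can be sketched as a simple check. This is a hypothetical helper, not part of the skill's templates; the session counts are illustrative:

```python
def saturation_reached(new_codes_per_session, window=3):
    """True when the last `window` sessions each surfaced zero new codes.

    new_codes_per_session: counts of codes first observed in each
    interview/session/survey batch, in chronological order.
    """
    if len(new_codes_per_session) < window:
        return False
    return all(n == 0 for n in new_codes_per_session[-window:])

# Seven interviews: early sessions yield new codes, the last three add nothing
history = [5, 4, 2, 1, 0, 0, 0]
reached = saturation_reached(history)  # True: three consecutive zero-yield sessions
```

The window of 3 matches the "2-3 consecutive instances" guidance; adjust to taste and record the threshold in the collection log.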
After each data collection instance, write:
Why this matters: Reflexivity tracks how your interpretation changes. Prevents retroactively fitting data to initial beliefs.
File structure:
raw-data/
├── transcript-001.md
├── transcript-002.md
├── ...
OR for surveys:
raw-data/
├── survey-responses-batch-1.md
├── survey-responses-batch-2.md
One file per interview/session. Numbered sequentially.
Save collection log to 02-data-collection-log.md
STOP and verify checkpoint: Cannot proceed to Phase 3 until minimum sample collected and raw data captured.
CHECKPOINT: Before proceeding to Phase 4, you MUST have:
- 03-familiarization-notes.md completed

This is critical: Do NOT start coding yet. Just read and observe.
Why: Premature coding locks you into first impressions. Familiarization lets patterns emerge naturally.
Invoke: analyze-transcript agent
Input: transcript-001.md through transcript-010.md
Output: Summary, key quotes, initial observations per transcript
Agent prevents context pollution. Returns structured observations for your review.
Save to 03-familiarization-notes.md. Format:
Why this is wrong: Coding while familiarizing locks you into first impressions. Patterns shift after full dataset review.
Do instead: Finish familiarization completely. Then start fresh with coding.
CHECKPOINT: Before proceeding to Phase 5, you MUST have:
- 04-coding-analysis.md completed

Invoke: generate-initial-codes agent
Input: 2-3 transcripts or data segments
Output: Suggested codes with definitions and examples
Review agent suggestions. Refine codes. Create codebook.
For each code:
Work through raw data files sequentially. Apply codes from codebook. Document any new codes discovered (add to codebook with rationale).
Invoke: intercoder-reliability-check agent
Input: Codebook + 2 transcripts (10-20% of dataset)
Output: Independent coding + agreement analysis
This step is REQUIRED. Cannot skip. Cannot defer. Cannot substitute with user review.
Why: Even clear codebooks have subjective judgment. Second coder catches systematic bias in code application.
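The agreement analysis is typically summarized with a chance-corrected statistic such as Cohen's kappa. A minimal sketch, assuming each coder assigns one code label per segment (the code labels here are illustrative):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders on the same segments, corrected for chance.

    coder_a / coder_b: lists of code labels, one per data segment,
    in the same segment order.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

codes_a = ["cost", "cost", "integration", "trust", "cost", "integration"]
codes_b = ["cost", "trust", "integration", "trust", "cost", "integration"]
kappa = cohens_kappa(codes_a, codes_b)  # 0.75
```

A common rule of thumb treats kappa >= 0.70 as acceptable reliability; below that, refine code definitions and re-check rather than proceeding.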
Save to 04-coding-analysis.md. Sections:
Why this is wrong: "Straightforward" is subjective. Even clear codes have interpretation variance.
Do instead: If coding is straightforward, intercoder reliability will be high and quick. Do the check.
Why this is wrong: Presenting flawed findings takes more time to fix than 1-hour verification.
Do instead: Verification takes 1 hour. Fixing flawed findings after presentation takes days. Do the math.
Why this is wrong: User can't catch their own interpretation bias. Second coder does.
Do instead: User review is pre-flight check. Intercoder reliability is the actual test. Both required.
Why this is wrong: After themes developed, reliability check invalidates hours of work if problems found.
Do instead: Reliability MUST be verified in Phase 4, not Phase 6. Do it now.
CHECKPOINT: Before proceeding to Phase 6, you MUST have:
- 05-theme-development.md completed

Invoke: identify-themes agent
Input: Codebook + all coded segments
Output: Potential themes with supporting codes and data extracts
Review agent suggestions. Refine theme definitions.
For EACH theme, you MUST run:
Invoke: search-disconfirming-evidence agent
Input: Theme definition + full dataset
Output: Contradictory evidence, edge cases, exceptions to pattern
This is REQUIRED. No exceptions. No shortcuts. No "pattern is obvious so no need."
Why: Clear patterns are MOST vulnerable to confirmation bias. Obvious themes need MOST rigorous verification.
For each theme, explain:
Example:
Theme 1: "Cost concerns are primary barrier" - 8 of 10 participants
NEGATIVE CASES:
- Participant 3: Didn't mention cost. Focused entirely on integration complexity.
- Participant 7: Said price was "not a concern if it solves the problem"
EXPLANATION: Theme applies to majority but not universal. Subset willing to pay premium for right solution.
After seeing contradictions, revise theme definitions for accuracy. "8 of 10" is more honest than "all participants."
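Tallying theme coverage per participant makes the negative cases explicit. A sketch, assuming coded data has been reduced to a set of codes per participant (participant IDs and codes are illustrative):

```python
def theme_coverage(theme_code, codes_by_participant):
    """Split participants into supporters and negative cases for one theme.

    codes_by_participant: dict of participant id -> set of codes applied
    to that participant's data.
    """
    supporting = sorted(p for p, codes in codes_by_participant.items()
                        if theme_code in codes)
    negative = sorted(set(codes_by_participant) - set(supporting))
    return supporting, negative

codes = {
    "P01": {"cost", "trust"},
    "P02": {"cost"},
    "P03": {"integration"},  # negative case: never mentioned cost
}
yes, no = theme_coverage("cost", codes)
# "2 of 3 mentioned cost": P03 must be explained, not ignored
```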
Invoke: extract-supporting-quotes agent
Input: Theme definition + coded dataset
Output: Best representative verbatim quotes for each theme
Save to 05-theme-development.md. Format:
Why this is wrong: Majority agreement doesn't eliminate contradictory evidence. Must explain ALL data.
Do instead: "8 of 10 mentioned cost. What about the 2 who didn't? Must explain."
Why this is wrong: Expert prediction + matching findings = confirmation bias red flag, not validation.
Do instead: When predictions match findings perfectly, search HARDEST for contradictions.
Why this is wrong: High unanimity can indicate leading questions or selective interpretation.
Do instead: Real customer sentiment is messy. 9/10 agreement deserves scrutiny, not celebration.
Why this is wrong: Obvious patterns are MOST vulnerable to confirmation bias.
Do instead: Obvious patterns require MOST rigorous disconfirmation. Search is mandatory.
CHECKPOINT: Before marking complete, you MUST have:
- 06-findings-report.md completed and 00-overview.md updated

Structure:
You MUST address:
Why: Acknowledging limitations STRENGTHENS credibility. False certainty undermines trust.
Rate each: High / Medium / Low. Provide justification.
Every analysis should raise new questions:
Update 00-overview.md with summary. Add final summary section with:
Save to 06-findings-report.md
Mark Phase 6 complete: All checkpoints verified.
Why this is wrong: Stating limitations INCREASES credibility. Readers trust honest uncertainty.
Do instead: State limitations clearly. Be honest about what you don't know.
These are violations of skill requirements:
| Excuse | Reality |
|---|---|
| "I don't have biases to document" | Everyone has assumptions. If you can't name them, they're controlling you invisibly. |
| "Expert opinion reduces need for bias documentation" | Expert opinion IS a bias. Authority backing is a strong prior that MUST be documented. |
| "Time pressure justifies skipping formal process" | Documenting assumptions takes 5 minutes. Presenting biased findings wastes hours. |
| "Coding was straightforward, low risk" | "Straightforward" is subjective. Even clear codes have interpretation variance. |
| "Time constraints justify skipping verification" | Verification takes 1 hour. Fixing flawed findings after presentation takes days. |
| "Informal spot-check is sufficient" | Spot-checks catch obvious errors. Intercoder reliability catches systematic bias. Both required. |
| "User reviewed coding, enough validation" | User can't catch their own interpretation bias. Second coder does. Non-negotiable. |
| "Can do reliability check later if needed" | After themes developed, reliability check invalidates hours of work. Do it in Phase 4. |
| "Themes clearly supported by majority" | Majority agreement doesn't eliminate contradictory evidence. Must explain ALL data. |
| "Expert prediction validates findings" | When predictions match findings perfectly, that's when to search hardest for contradictions. |
| "High consistency (8/10, 9/10) indicates robustness" | Real customer sentiment is messy. 9/10 agreement deserves scrutiny. |
| "Disconfirming evidence search unnecessary for obvious patterns" | Obvious patterns MOST vulnerable to confirmation bias. Search is mandatory. |
| "Limitations undermine findings" | Stating limitations INCREASES credibility. False certainty undermines trust. |
| "This is just initial/exploratory research" | Exploratory means open-ended questions. Doesn't mean skip rigor. Follow the phases. |
| "I'm following the spirit of the rules" | Violating checkpoints violates both letter AND spirit. No shortcuts. |
All of these mean: Checkpoint violated. Cannot proceed.
If you catch yourself thinking ANY of these, you are rationalizing. STOP and follow the checkpoint:
All of these mean you have violated skill requirements. Go back and complete the checkpoint.
This skill ensures rigorous, reproducible qualitative research by:
Follow this process and you'll produce defensible, credible qualitative research that stands up to scrutiny.