# Thinking
Apply the scientific method to any problem — define goal, hypothesise, experiment, measure, iterate. The meta-skill governing all structured investigation. Use when debugging, validating assumptions, or turning uncertainty into knowledge.
Install with `npx claudepluginhub hpsgd/turtlestack --plugin thinking`.

This skill is limited to using the following tools:
Apply the scientific method to $ARGUMENTS. This is the universal cycle for turning uncertainty into knowledge.
GOAL → OBSERVE → HYPOTHESISE → EXPERIMENT → MEASURE → ANALYSE → ITERATE
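The cycle above can be sketched as a loop. This is an illustrative sketch only; the `Hypothesis` type and the `investigate` function are hypothetical names, not part of the skill:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    claim: str        # specific claim about what's happening and why
    prediction: str   # observation expected if the claim is true

def investigate(goal_met: Callable[[], bool],
                hypotheses: list[Hypothesis],
                experiment: Callable[[Hypothesis], str],
                max_cycles: int = 5) -> list[str]:
    """Run the OBSERVE → HYPOTHESISE → EXPERIMENT → MEASURE → ANALYSE loop."""
    learnings = []
    for _ in range(max_cycles):
        if goal_met():                           # GOAL reached: stop iterating
            break
        for h in hypotheses:
            result = experiment(h)               # EXPERIMENT + MEASURE
            confirmed = result == h.prediction   # ANALYSE against the prediction
            learnings.append(f"{h.claim}: {'confirmed' if confirmed else 'refuted'}")
            if confirmed:
                break                            # ITERATE from the confirmed branch
    return learnings
```

The loop records what each cycle taught, matching the idea that the value of a cycle is the knowledge gained, not just the verdict.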
## Step 1: Define the goal

What does success look like? Without clear success criteria, you cannot judge results.
### Goal definition
**Goal:** [specific, measurable outcome]
**Current state:** [where things are now — with numbers if possible]
**Target state:** [where they need to be — with numbers]
**How to measure:** [the specific metric or observation that proves success]
Rules for goals:
Output: Goal definition with current state, target state, and measurement method.
## Step 2: Observe

Gather data about the current state BEFORE forming hypotheses. Observation without hypothesis prevents confirmation bias.
### Observations
| # | Observation | Source | Surprising? |
|---|---|---|---|
| 1 | [what you actually see — not what you expect] | [where you found it] | Yes/No |
| 2 | [observation] | [source] | [surprise?] |
**What has been tried before:** [previous attempts and their outcomes]
**What measurements exist:** [available data, logs, metrics]
**What's missing:** [data you wish you had but don't]
Rules for observation:
Output: Observation table with sources and surprise flags.
## Step 3: Hypothesise

Generate MULTIPLE hypotheses, not just one. A single hypothesis is confirmation bias waiting to happen.
### Hypotheses
| # | Hypothesis | If true, expect to see | If false, expect to see | Likelihood |
|---|---|---|---|---|
| H1 | [specific claim about what's happening and why] | [predicted observation] | [predicted observation] | High/Medium/Low |
| H2 | [alternative explanation] | [prediction] | [prediction] | [likelihood] |
| H3 | [another possibility] | [prediction] | [prediction] | [likelihood] |
Rules for hypotheses:
Output: Hypothesis table with predictions and falsification criteria.
## Step 4: Experiment

For the highest-likelihood hypothesis, design the smallest test that would confirm or refute it:
### Experiment design
**Testing hypothesis:** H[N]
**Variable (what changes):** [the one thing you're changing]
**Control (what stays the same):** [everything else]
**Measurement:** [how you'll know the result]
**Expected result if hypothesis is correct:** [specific outcome]
**Expected result if hypothesis is wrong:** [specific outcome]
**Time budget:** [maximum time before concluding]
Rules for experiments:
Output: Experiment design with single variable, control, and predictions.
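As an illustration of single-variable design, the sketch below changes exactly one thing (list vs. set membership) while the data and the query stay fixed as the control. The scenario and its numbers are assumptions for the example, not part of the skill:

```python
import timeit

# Control: identical data and identical query for both runs.
# Variable: the container type (list vs. set) is the ONE thing that changes.
data = list(range(10_000))
as_set = set(data)
needle = 9_999

# Hypothesis: set membership is faster than list membership here.
# Expected result if correct: t_set is clearly smaller than t_list.
t_list = timeit.timeit(lambda: needle in data, number=1_000)
t_set = timeit.timeit(lambda: needle in as_set, number=1_000)

print(f"list: {t_list:.4f}s  set: {t_set:.4f}s  hypothesis held: {t_set < t_list}")
```

Because only one variable changed, a difference in timing can be attributed to the container type rather than to noise in the data or the query.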
## Step 5: Measure

Run the experiment and record results:
### Results
**What happened:** [factual description of outcome]
**Expected outcome matched:** Yes / No / Partially
**Quantitative result:** [numbers if applicable]
**Unexpected observations:** [anything you didn't predict]
Rules for measurement:
Output: Results with comparison to predictions.
## Step 6: Analyse

Compare results to the goal:
### Analysis
**Hypothesis H[N] status:** Confirmed / Refuted / Inconclusive
**Distance from goal:** [how far current state is from target state]
**What we learned:** [new knowledge gained — the actual value of this cycle]
**What we still don't know:** [remaining uncertainty]
Output: Hypothesis verdict and remaining uncertainty.
## Step 7: Iterate

Based on the analysis, decide the next action. If the problem framing itself appears wrong, switch to /first-principles.

Output: Decision on next action with reasoning.
For debugging (the 15-minute rule), record each investigation using this template:
## Investigation: [problem]
### Goal
[Goal definition from Step 1]
### Observations
[Observation table from Step 2]
### Hypotheses
[Hypothesis table from Step 3]
### Experiment
[Design from Step 4]
### Results
[Measurements from Step 5]
### Analysis
[Verdict from Step 6]
### Next Action
[Decision from Step 7]
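One reading of the 15-minute rule, a hard time budget per line of investigation, can be sketched as follows. The `timeboxed` helper and its names are illustrative assumptions, not part of the skill:

```python
import time

def timeboxed(test, budget_s: float) -> str:
    """Run a hypothesis test, but give up once the time budget is spent.

    `test` returns True (confirmed), False (refuted), or None (no verdict
    yet, e.g. a flaky reproduction). Once the budget is exhausted, record
    the hypothesis as inconclusive and move on rather than digging deeper.
    """
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        verdict = test()
        if verdict is not None:
            return "confirmed" if verdict else "refuted"
        time.sleep(0.01)  # brief pause before re-checking
    return "inconclusive"
```

An "inconclusive" result is still a result: it goes into the Analysis section as remaining uncertainty and triggers the next hypothesis instead of an open-ended dig.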
Related skills:

- /first-principles — when the scientific method reveals that the problem framing itself is wrong. Decompose and rebuild.
- /algorithm — for systematic execution once you know WHAT to do. The scientific method figures out what's true; the algorithm executes the plan.
- /learning — capture experimental results as learnings for future reference.