From code-foundations
Decomposes ambiguous user requests via structured brainstorming and hypothesis-driven questions to detect underspecification, false premises, and multiple interpretations. For unclear coding tasks.
npx claudepluginhub ryanthedev/code-foundations

This skill uses the workspace's default tool permissions.
Understand what the user needs before committing to work.
Clarifies vague user requests via iterative Q&A loop and parallel subagent codebase exploration. Outputs scoped context brief for precise planning. Triggers on 'I want to...' or ambiguous scopes.
Asks targeted clarifying questions when a request lacks clear objectives, scope, constraints, environment, or acceptance criteria, and confirms safety before implementing code.
LLMs default to assuming rather than asking — even frontier models proceed without clarification in 70% of cases where information is missing. This skill counteracts that bias by classifying what's unclear and generating targeted clarifying questions.
Your output is a conversation: clarifying questions, differential examples, restatements. Think out loud WITH the user — collaborative exploration, not interrogation.
Not all gaps are the same. Classifying the type determines what kind of question to ask.
Intention faults — The real goal isn't recoverable from the request.
Premise faults — An assumption in the request is wrong.
Parameter faults — Required details are missing or conflicting.
Expression faults — The language prevents unique interpretation.
Once you've identified a gap, classify which direction it pulls — this shapes your question:
| Direction | Signal | Clarification Action |
|---|---|---|
| Semantic | Key terms have multiple valid meanings | Disambiguate: "do you mean A or B?" |
| Too broad | Clear intent but scope is huge | Specify: "which part matters most right now?" |
| Too narrow | Request is oddly specific for the likely goal | Generalize: "what's the broader outcome you're after?" |
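The taxonomy and direction table above can be sketched as data. Everything below (the enum names, the `ACTION` mapping) is an illustrative encoding, not part of the skill's actual interface:

```python
from enum import Enum, auto

class Fault(Enum):
    INTENTION = auto()   # real goal not recoverable from the request
    PREMISE = auto()     # an assumption in the request is wrong
    PARAMETER = auto()   # required details missing or conflicting
    EXPRESSION = auto()  # wording prevents a unique interpretation

class Direction(Enum):
    SEMANTIC = auto()    # key terms have multiple valid meanings
    TOO_BROAD = auto()   # clear intent, but the scope is huge
    TOO_NARROW = auto()  # oddly specific for the likely goal

# Each direction maps to one clarification action.
ACTION = {
    Direction.SEMANTIC: "disambiguate: 'do you mean A or B?'",
    Direction.TOO_BROAD: "specify: 'which part matters most right now?'",
    Direction.TOO_NARROW: "generalize: 'what's the broader outcome?'",
}
```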
A coding request can become ambiguous in several concrete ways; inconsistencies between requirements are the hardest to detect. Explicitly check whether parts of the request conflict with each other.
Don't start with "what should I ask?" Start with "what are the plausible interpretations?" Then find the question whose answer eliminates the most of them.
For example, one question targeting the axis of disagreement beats three questions about implementation details.
Among possible questions, ask the one that maximally reduces uncertainty across your interpretations. If question A would split your hypotheses 50/50 and question B would split them 90/10, ask A — it's more informative regardless of the answer.
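This selection rule is the expected-information-gain criterion from decision theory. A minimal sketch, where the hypothesis weights and candidate questions are toy values invented for illustration:

```python
from math import log2

def entropy(weights):
    """Shannon entropy (bits) of an unnormalized weight distribution."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * log2(p) for p in probs)

def expected_information_gain(weights, splits):
    """Expected entropy reduction from asking one question.

    `splits` maps each possible answer to the weights of the
    hypotheses that survive that answer.
    """
    total = sum(weights)
    expected_after = sum(
        (sum(surviving) / total) * entropy(surviving)
        for surviving in splits.values()
    )
    return entropy(weights) - expected_after

# Four equally plausible interpretations of the request.
weights = [1, 1, 1, 1]
# Question A splits the hypotheses 50/50; question B splits them 75/25.
question_a = {"yes": [1, 1], "no": [1, 1]}
question_b = {"yes": [1, 1, 1], "no": [1]}
# The even split is more informative regardless of the answer.
assert expected_information_gain(weights, question_a) > \
       expected_information_gain(weights, question_b)
```

A 50/50 split yields a full bit of information; a 90/10 split yields far less, because most of the time the answer merely confirms what you already suspected.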
Target convergence in 3-5 rounds. Beyond that, returns diminish sharply.
When possible, show the user what different interpretations produce rather than asking abstract questions:
Show differential behavioral examples — "If you mean X, here's what happens for input Z. If you mean Y, here's what happens instead." Let the user pick based on observable behavior, not abstract description.
Match your approach to the fault type:
| Strategy | When | Example |
|---|---|---|
| Ask for parameter | Specific detail is missing | "What should happen when the input is empty?" |
| Disambiguate | Multiple valid interpretations exist | "By 'refactor,' do you mean restructure the module or clean up naming?" |
| Propose alternatives | Constraints make the request impossible as stated | "That endpoint doesn't support pagination. We could add it, or switch to cursor-based fetching." |
| Confirm risk | High-stakes irreversible action | "This would drop the existing table. Proceed, or migrate the data first?" |
| Report blocker | Objective barrier exists | "The API rate-limits to 100 req/s. The current design needs 300. How should we handle that?" |
Every question should pass these checks:
| Attribute | Test |
|---|---|
| Focused | Addresses ONE gap — no compound questions |
| Answerable | User can answer from what they already know |
| Discriminative | The answer meaningfully narrows interpretations |
| Non-leading | Doesn't presuppose the answer |
| Task-relevant | Directly advances the work at hand |
| Constructive | Builds toward shared understanding, not just gathering data |
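Most of these checks require judgment, but the mechanical ones can be linted before a question is sent. The heuristics below are illustrative assumptions, not a complete validator:

```python
def lint_question(question: str) -> list[str]:
    """Flag obvious checklist violations in a draft clarifying question.

    Cheap surface heuristics only; the semantic checks (discriminative,
    answerable, task-relevant) still need judgment.
    """
    problems = []
    if question.count("?") > 1:
        problems.append("compound: addresses more than one gap")
    if question.lower().startswith(("don't you", "wouldn't you", "surely")):
        problems.append("leading: presupposes the answer")
    return problems
```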
Estimate the effort each question requires from the user:
| Effort | Example | Policy |
|---|---|---|
| Low | "Should this be async or sync?" | Ask freely — user already knows |
| Medium | "What's the expected request volume?" | Ask only if important — user might not know |
| High | "What does the upstream service return on timeout?" | Don't ask — investigate yourself |
The principle: ask about intent, goals, and constraints (the user's knowledge). Figure out implementation details yourself (your job).
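The effort tiers can be read as a lookup policy. A minimal sketch, with the tier labels assumed from the table above:

```python
def question_policy(effort: str) -> str:
    """Return the asking policy for a question's effort tier.

    Tiers mirror the effort table; this is an illustrative sketch,
    not part of the skill's actual interface.
    """
    policies = {
        "low": "ask freely: the user already knows",
        "medium": "ask only if important: the user might not know",
        "high": "do not ask: investigate yourself",
    }
    return policies[effort]
```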
Maintain two mental sets as the conversation progresses: what the user has confirmed, and what has been ruled out. Score remaining interpretations by alignment with confirmed items and conflict with ruled-out items; this naturally narrows the space with each turn.
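The two-set scoring can be sketched as a simple function. All set contents, candidate names, and the penalty weight below are hypothetical:

```python
def score_interpretation(claims, confirmed, ruled_out):
    """Score one interpretation against the two mental sets.

    Alignment with a confirmed item earns a point; depending on a
    ruled-out item costs more, since it contradicts the user directly.
    """
    score = 0
    for claim in claims:
        if claim in confirmed:
            score += 1
        if claim in ruled_out:
            score -= 2  # conflict weighs more than alignment
    return score

confirmed = {"async API", "keep existing schema"}
ruled_out = {"rewrite in Go"}
candidates = {
    "add async endpoint": {"async API", "keep existing schema"},
    "full rewrite": {"rewrite in Go", "async API"},
}
best = max(
    candidates,
    key=lambda name: score_interpretation(candidates[name], confirmed, ruled_out),
)
```

As answers accumulate, each turn re-scores the surviving interpretations, so the front-runner emerges without re-asking anything.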
Three separate questions, in order: should I clarify at all, what exactly is unclear, and which question best resolves it. Don't collapse these. Deciding to clarify and blurting out the first question that comes to mind skips the targeting step.
| Pattern | Problem | Instead |
|---|---|---|
| Proceeding without checking | 70% default execution bias | Run detection as a separate pass first |
| Asking implementation details | Shifts investigation to the user | Figure it out from the codebase |
| Rapid-fire question lists | Feels like an interrogation | 1-3 questions max per turn |
| Asking what you could read from code | Wastes their time | Read first, ask about what you can't determine |
| Over-asking on clear requests | Delays work, erodes trust | If it's clear, proceed |
| Abstract questions | Harder for the user to reason about | Show differential examples |
| Leading questions | Biases response, masks intent | Open-ended, or balanced options |
| One round and done | Complex requests need iteration | Continue until hypotheses converge |
| Asking when outcomes are equivalent | Unnecessary friction | Favor direct action on ties |
| After | Next |
|---|---|
| Ambiguity resolved, shared understanding reached | Return to calling skill with goal/scope/constraints/approach |
| New ambiguity surfaces during work | Re-enter clarify loop |