Pattern for subagents to request user input when AskUserQuestion is unavailable
Version: 1.0.0
Portability: High
Defines a pattern for subagents (background tasks, delegated agents) to request user input when they cannot directly call AskUserQuestion.
Purpose: Enable long-running or background agents to ask clarifying questions without failing or making assumptions.
Scope:
The Problem: In most agent frameworks, only the main conversation can call AskUserQuestion. Subagents (background tasks, delegated agents) attempting to call it will fail.
Why this matters: Long-running agents often hit decision points requiring user input. Without a pattern, they must either guess (bad) or fail (wasteful).
How to apply:
Example:
# Agent framework limitation
Main Conversation: CAN call AskUserQuestion ✓
Subagent (background): CANNOT call AskUserQuestion ✗
# Solution
Subagent pauses → Main conversation asks → Subagent resumes
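The pause half of this flow can be sketched as a small detector the main conversation runs over a subagent's final output. This is a minimal sketch: the `PAUSED_FOR_INPUT` sentinel matches the examples in this document, but the optional `": <path>"` payload and the return shape are assumptions, not a fixed API.

```javascript
// Detect a pause signal in a subagent's final output.
// Returns null when the agent finished normally.
function detectPause(output) {
  const match = output.match(/^PAUSED_FOR_INPUT(?::\s*(.+))?$/m);
  if (!match) return null;
  // statePath is the optional pointer to saved state (file, checkpoint id, ...)
  return { statePath: match[1] ?? null };
}

detectPause("All tests pass.");
detectPause("PAUSED_FOR_INPUT: /tmp/agent-pause-state.json");
```

Keeping the sentinel on its own line makes the detector trivial and avoids false positives from the word appearing mid-sentence.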
The Principle: Before requesting input, save all necessary context so work can resume seamlessly.
Why this matters: Session restarts lose conversation history. State preservation ensures no wasted work redoing analysis.
How to apply:
Example (Framework-Agnostic):
State to preserve:
- Task description and progress
- Files created: [auth_test.rs, auth.rs]
- Files analyzed: [user.rs, session.rs]
- Decision needed: "Should email validation be strict or lenient?"
- Context: "Found 3 different email patterns in existing code"
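The state listed above can be expressed as a plain object and round-tripped through JSON before signaling a pause. The field names here are illustrative assumptions; what matters is that the checkpoint survives serialization to whatever store your framework offers (a file, task metadata, or an MCP memory server).

```javascript
// Illustrative checkpoint shape mirroring the state listed above.
const checkpoint = {
  task: "Implement email validation for the auth module",
  filesCreated: ["auth_test.rs", "auth.rs"],
  filesAnalyzed: ["user.rs", "session.rs"],
  decisionNeeded: "Should email validation be strict or lenient?",
  context: "Found 3 different email patterns in existing code",
};

// Round-trip through JSON to confirm nothing is lost in storage.
const restored = JSON.parse(JSON.stringify(checkpoint));
```

If any field fails to round-trip (functions, circular references), it would silently vanish on resume, so a check like this is cheap insurance.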
The Principle: Use a consistent, structured format for questions that includes context, options, and constraints.
Why this matters: Ad-hoc question formats lead to confusion. Structured formats ensure users have enough context to make informed decisions.
How to apply:
Example Format:
{
"context": "Why you're asking this question",
"question": "The actual question text?",
"options": [
{
"label": "Option A",
"description": "What this means and its implications"
},
{
"label": "Option B",
"description": "What this means and its implications"
}
],
"multiSelect": false
}
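A subagent can validate its question payload against this format before emitting the pause signal, so the main conversation never receives a malformed ask. This is a hedged sketch; the exact rules (minimum two options, non-empty descriptions) are assumptions layered on the format above.

```javascript
// Validate a question payload against the structured format above.
// Returns a list of problems; an empty list means the payload is usable.
function validateQuestion(q) {
  const errors = [];
  if (!q.context) errors.push("missing context");
  if (!q.question) errors.push("missing question");
  if (!Array.isArray(q.options) || q.options.length < 2)
    errors.push("need at least two options");
  for (const opt of q.options ?? [])
    if (!opt.label || !opt.description)
      errors.push("each option needs a label and a description");
  return errors;
}
```

Running this check at pause time turns a vague mid-conversation failure into an immediate, fixable error inside the subagent.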
The Principle: When resuming with the user's answer, retrieve saved state and continue seamlessly without redoing work.
Why this matters: Users expect agents to remember what they were doing. Redoing analysis wastes time and creates frustration.
How to apply:
Example:
# Bad: Restart from scratch
"Let me re-analyze all the code to understand the problem again..."
# Good: Resume cleanly
"You chose Option A (strict email validation). Continuing implementation..."
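The clean-resume prompt can be assembled mechanically from the user's answers and a checkpoint pointer. This sketch follows the `USER_INPUT_RESPONSE` convention used in the worked example later in this document; the function name and argument shapes are assumptions.

```javascript
// Build a resume prompt carrying the user's answers plus a pointer to
// saved state, so the agent continues without redoing analysis.
function buildResumePrompt(answers, checkpointName) {
  return [
    "USER_INPUT_RESPONSE",
    JSON.stringify(answers),
    `Continue from checkpoint: ${checkpointName}`,
  ].join("\n");
}
```

Because the answers are embedded as JSON keyed by question id, the resumed agent can match each answer back to the decision it was blocked on.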
Rationale: This pattern bridges the gap between subagent limitations and user interaction needs while preserving work and context.
Scenario: Long-running mutation testing agent finds surviving mutants and needs to know whether to create individual tasks.
Approach:
Example (Conceptual):
# Step 1: Agent pauses
Save state:
- Feature: user-authentication
- Progress: Mutation testing complete, 97% score
- Pending decision: 3 surviving mutants found
- Mutant details: [file:line details for each]
# Step 2: Agent signals
Output: PAUSED_FOR_INPUT
Question: "Found 3 surviving mutants. Create individual fix tasks?"
Options: ["Yes - create tasks", "No - just report"]
# Step 3: Main conversation intermediates
(Main conversation detects pause, calls AskUserQuestion)
User answer: "Yes - create tasks"
# Step 4: Agent resumes
Retrieve state → Read mutant details → Create 3 tasks → Complete
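The four steps above can be sketched as a single orchestrator loop in the main conversation. Everything here is hypothetical: `runAgent`, `askUser`, and `resumeAgent` are stand-ins for framework-specific calls (e.g. Task and AskUserQuestion), and the result shape is an assumption.

```javascript
// Hypothetical orchestrator: run the agent, intermediate any pauses,
// and return only when the agent finishes for real.
async function orchestrate(runAgent, askUser, resumeAgent, prompt) {
  let result = await runAgent(prompt);
  while (result.status === "PAUSED_FOR_INPUT") {
    // Step 3: main conversation asks the user on the agent's behalf
    const answer = await askUser(result.question, result.options);
    // Step 4: resume the agent from its checkpoint with the answer
    result = await resumeAgent(result.checkpoint, answer);
  }
  return result;
}
```

The loop (rather than a single pass) matters: a resumed agent may legitimately hit a second decision point and pause again.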
Scenario: Domain modeling agent finds conflicting patterns in existing code and needs business rule clarification.
Approach:
Example:
State:
- Analyzing: Email validation logic
- Found: Two patterns
- Pattern A: Strict RFC 5322 compliance (auth module)
- Pattern B: Lenient validation (signup module)
- Decision: Which pattern should be standard?
Question: "Found two email validation approaches. Which should be canonical?"
Options:
- "Strict RFC 5322 (more secure, may reject valid emails)"
- "Lenient (more permissive, may accept invalid emails)"
- "Keep both (context-dependent)"
Scenario: Code review agent finds architectural inconsistency and needs to know preferred approach.
Approach:
Works well with:
Prerequisites:
Problem: Subagent attempts a direct AskUserQuestion call and fails
Solution: Accept the limitation. Use pause-and-resume pattern instead.
Problem: Agent pauses but doesn't save context, so on resumption it starts over
Solution: Always save comprehensive state before signaling pause. Test resumption to verify no work is lost.
Problem: "What should I do about this?" with no context or choices
Solution: Provide specific context, clear options with descriptions, and enough information for the user to decide.
Problem: Agent assumes the user will answer in a specific format and breaks when they don't
Solution: Provide structured options. If using free text, validate and clarify if needed.
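When free text is unavoidable, the answer can be normalized back to one of the offered options instead of being compared verbatim. The matching strategy here (case-insensitive prefix match) is an assumption; the point is to validate rather than trust the raw string.

```javascript
// Match a free-text answer to the closest option label, or null if
// nothing matches (in which case the agent should ask again).
function matchOption(answer, options) {
  const a = answer.trim().toLowerCase();
  return options.find((o) => o.toLowerCase().startsWith(a)) ?? null;
}
```

Returning `null` on no match gives the agent a concrete trigger to re-ask with clarification, rather than proceeding on a misread answer.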
Context: Agent using Memento MCP server for state preservation
Step 1: Create Checkpoint
// Save state to Memento
await mcp__memento__create_entities({
entities: [{
name: "mutation-agent Checkpoint 2026-02-04T10:30:00Z",
entityType: "agent_checkpoint",
observations: [
"Agent: mutation-agent | Task: mutation testing for auth module",
"Progress: Testing complete, 97% mutation score",
"Files analyzed: src/auth.rs, tests/auth_test.rs",
"Next step: Handle 3 surviving mutants",
"Pending decision: Create individual tasks or just report?"
]
}]
});
Step 2: Signal Pause
AWAITING_USER_INPUT
{
"context": "Mutation testing found 3 surviving mutants (97% score overall)",
"checkpoint": "mutation-agent Checkpoint 2026-02-04T10:30:00Z",
"questions": [{
"id": "q1",
"question": "Should I create individual fix tasks for each surviving mutant?",
"header": "Mutants",
"options": [
{"label": "Yes - create tasks", "description": "Create 3 separate tasks to address each mutant"},
{"label": "No - just report", "description": "Document the mutants but don't create tasks"}
],
"multiSelect": false
}]
}
Step 3: Main Conversation Intermediates
// Main conversation detects AWAITING_USER_INPUT
// Calls AskUserQuestion with the provided structure
// Receives answer: "Yes - create tasks"
Step 4: Resume Agent
// Main conversation resumes agent
await Task({
subagent_type: "mutation",
resume: agentId,
prompt: `USER_INPUT_RESPONSE
{"q1": "Yes - create tasks"}
Continue from checkpoint: mutation-agent Checkpoint 2026-02-04T10:30:00Z`
});
// Agent retrieves checkpoint and continues
const checkpoint = await mcp__memento__open_nodes({
node_ids: ["mutation-agent Checkpoint 2026-02-04T10:30:00Z"]
});
// Create 3 tasks for the surviving mutants...
Context: Agent using Claude Code task metadata for state
Step 1: Update Task Metadata
// Save state to task metadata
await TaskUpdate({
taskId: currentTaskId,
metadata: {
...existingMetadata,
agent_id: myAgentId,
needs_input: true,
question: "Found two validation approaches. Which should be standard?",
question_context: {
pattern_a: "Strict RFC 5322 (auth module)",
pattern_b: "Lenient validation (signup module)",
files_analyzed: ["src/auth/mod.rs", "src/signup/mod.rs"]
},
question_options: [
"Strict RFC 5322 (Recommended for security)",
"Lenient validation",
"Keep both (context-dependent)"
]
}
});
// Exit agent
return "Paused - awaiting user decision on validation approach";
Step 2: Main Conversation Detects Pause
// Main orchestrator polls tasks
const task = await TaskGet(taskId);
if (task.metadata.needs_input) {
// Ask user
const answer = await AskUserQuestion({
questions: [{
question: task.metadata.question,
header: "Validation",
options: task.metadata.question_options.map(opt => ({
label: opt,
description: ""
})),
multiSelect: false
}]
});
// Update task with answer
await TaskUpdate({
taskId: task.id,
metadata: {
...task.metadata,
needs_input: false,
user_answer: answer,
answered_at: new Date().toISOString()
}
});
// Resume agent
await Task({
subagent_type: "domain",
resume: task.metadata.agent_id,
prompt: `User chose: "${answer}". Continue from where you left off.`
});
}
Step 3: Agent Resumes
// Agent resumes with FULL previous context
const task = await TaskGet(myTaskId);
const answer = task.metadata.user_answer;
// Continue based on answer
if (answer === "Strict RFC 5322 (Recommended for security)") {
// Apply strict validation standard...
}
Context: Agent without MCP or tasks, using filesystem
Step 1: Write State File
# Save state to file
cat > /tmp/agent-pause-state.json << 'EOF'
{
"agent": "domain-agent",
"timestamp": "2026-02-04T10:30:00Z",
"progress": "Analyzed email validation patterns",
"files": ["src/auth/mod.rs", "src/signup/mod.rs"],
"decision": "Which validation pattern?",
"question": {
"text": "Found two email validation approaches. Which should be standard?",
"options": ["Strict RFC 5322", "Lenient", "Keep both"]
}
}
EOF
Step 2: Output Pause Signal
PAUSED_FOR_INPUT: /tmp/agent-pause-state.json
Question: Found two email validation approaches. Which should be standard?
A) Strict RFC 5322 (more secure)
B) Lenient (more permissive)
C) Keep both (context-dependent)
Step 3: Resume with Answer
# Read state
state=$(cat /tmp/agent-pause-state.json)
# Read answer (provided by main conversation)
answer="A"
# Continue work based on answer...
Use this checklist to verify you're applying this pattern correctly:
Source Documentation:
Related Skills:
External Resources:
Extraction Source: sdlc/commands/shared/user-input-protocol.md
Extraction Date: 2026-02-04
Last Updated: 2026-02-04
Compatibility: Claude Code 2.1+, Cursor, Windsurf, Cline (with agent resumption support)
License: MIT