Interactive optimization for prompts, code, queries, and more, using research-backed techniques, web best practices, and step-by-step approval.
Install via:

```bash
/plugin marketplace add v1truv1us/ai-eng-system
/plugin install ai-eng-system@ai-eng-marketplace
```

An interactive optimization tool that enhances content using research-backed techniques, web-researched best practices, and iterative refinement based on user feedback.
Take a deep breath and approach this optimization systematically. Analyze content, apply appropriate techniques, and iteratively refine based on feedback.
Optimization quality directly impacts user satisfaction and content effectiveness. Poor optimization can make content worse, introduce errors, or waste the user's time. This optimization task is critical for enhancing user productivity and improving content quality across all content types.
The central challenge is balancing aggressive optimization with preservation: enhancing effectiveness while maintaining accuracy, clarity, and user intent. Success means measurable improvement without introducing errors or losing essential meaning.
This command works alongside the automatic prompt optimization system:

- Use the `!` prefix to skip optimization
- Use `/ai-eng/optimize --prompt` for explicit optimization

The automatic system runs via the `prompt-optimize` tool (called when needed). When optimizing prompts, the system provides an interactive approval workflow; see docs/optimize-approval-flow.md for complete details.
```
/ai-eng/optimize --help                    # Show help and available types
/ai-eng/optimize <content-or-file>         # Auto-detect type and optimize
/ai-eng/optimize <content> --prompt        # Optimize AI prompts with step-by-step approval
/ai-eng/optimize <content> --query         # Enhance database/search queries
/ai-eng/optimize <content> --code          # Improve code quality
/ai-eng/optimize <content> --commit        # Optimize commit messages
/ai-eng/optimize <content> --docs          # Enhance documentation
/ai-eng/optimize <content> --email         # Improve communication
/ai-eng/optimize <content> --type=<type>   # Explicit type flag
```
When --help is passed, display:
```
OPTIMIZE COMMAND - Enhance content using research-backed techniques

USAGE:
  /optimize <content-or-file> [--type]   Auto-detect and optimize content
  /optimize <content> --prompt           Optimize AI prompts (with step-by-step approval)
  /optimize <content> --query            Enhance database/search queries
  /optimize <content> --code             Improve code quality
  /optimize <content> --commit           Optimize git commit messages
  /optimize <content> --docs             Enhance documentation
  /optimize <content> --email            Improve communication

TYPES:
  --prompt   AI prompts: structure, personas, reasoning chains (interactive approval)
  --query    Database/search: indexes, execution plans, caching
  --code     Source code: performance, readability, error handling
  --commit   Git messages: clarity, conventional format
  --docs     Documentation: structure, examples, clarity
  --email    Communication: tone, clarity, call-to-action

PROMPT OPTIMIZATION FLOW (--prompt mode):
  1. Analyze prompt and detect domain/complexity
  2. Show optimization plan with steps
  3. Guide through interactive approval:
     - Approve all steps
     - Approve specific steps
     - Modify step content
     - Edit final prompt directly
     - Cancel and use original
  4. Calculate expected improvement
  5. Execute final prompt

OPTIONS:
  -m, --mode          Approach: conservative | moderate | aggressive
  -p, --preview       Show changes without applying
  -a, --apply         Apply confirmed optimizations
  -i, --interactive   Enable clarifying questions (non-prompt types)
  -s, --source        Research sources: anthropic | openai | opencode | all
  -v, --verbose       Show detailed process
  --help              Show this help

EXAMPLES:
  /optimize "Help me debug auth" --prompt            # Step-by-step prompt optimization
  /optimize "Help me debug auth" --prompt --verbose  # Verbose prompt optimization
  /optimize "SELECT * FROM users" --query --preview  # Query optimization preview
  /optimize src/auth.js --code --apply               # Apply code optimizations
  /optimize "fix: resolve login bug" --commit        # Optimize commit message
```
The command maintains backward compatibility:

- `!`: The escape hatch still works for all prompt types
- When optimization is skipped or a simple prompt is detected, the flow proceeds normally with the original content
## Types
| Type | Purpose | Examples |
|------|---------|-----------|
| `prompt` | Optimize AI prompts for better responses | User prompts to AI models |
| `query` | Enhance database/search queries | SQL, search, API queries |
| `code` | Improve code quality and performance | Functions, algorithms, scripts |
| `commit` | Optimize git commit messages | Commit text, PR descriptions |
| `docs` | Enhance documentation clarity | README files, API docs |
| `email` | Improve communication effectiveness | Professional emails, messages |
## Options
- `-t, --type <type>`: Content type (prompt|query|code|commit|docs|email) [default: auto-detect]
- `-s, --source <sources>`: Research sources (anthropic|openai|opencode|all) [default: all]
- `-i, --interactive`: Enable interactive refinement with questions
- `-p, --preview`: Show optimization preview before applying
- `-a, --apply`: Apply confirmed optimizations
- `-o, --output <file>`: Save optimized content to file
- `-f, --force`: Apply optimizations without confirmation
- `-v, --verbose`: Show detailed research and optimization process
- `--questions`: Ask clarifying questions before optimization
- `-m, --mode <mode>`: Optimization approach (conservative|moderate|aggressive) [default: moderate]
## Process
### Phase 1: Content Analysis
1. **Type Detection**: Auto-detect content type if not specified
2. **Context Assessment**: Analyze content's purpose, audience, and constraints
3. **Quality Evaluation**: Identify areas for improvement (clarity, performance, effectiveness)
4. **Research Planning**: Determine best sources and techniques to apply
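The type-detection step above can be sketched as a simple heuristic. This is illustrative only: the function name and the patterns are assumptions, not the actual implementation.

```typescript
// Illustrative heuristic for Phase 1 type detection; patterns and
// function name are assumptions, not the actual implementation.
type ContentType = "prompt" | "query" | "code" | "commit" | "docs" | "email";

function detectType(content: string): ContentType {
  const c = content.trim();
  // SQL-like statements -> query
  if (/^(select|insert|update|delete)\b/i.test(c)) return "query";
  // Conventional-commit prefixes -> commit
  if (/^(feat|fix|chore|docs|refactor|test)(\(.+\))?:/i.test(c)) return "commit";
  // Common source-code keywords -> code
  if (/(=>|\bfunction\b|\bclass\b|\bdef\b|\bimport\b|\bconst\b)/.test(c)) return "code";
  // Salutations or sign-offs -> email
  if (/^(dear|hi|hello)\b/i.test(c) || /regards,?\s*$/i.test(c)) return "email";
  // Markdown headings -> docs
  if (/^#{1,6}\s/m.test(c)) return "docs";
  return "prompt"; // default: treat free-form text as a prompt
}
```

A real detector would weigh multiple signals rather than first-match rules, but the fallback-to-`prompt` behavior matches how the command treats free-form text.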
### Phase 1.5: Prompt Refinement (for --prompt mode)
When `--prompt` flag is used:
**Primary Workflow: Step-by-Step Approval**
Use the `prompt-optimize` tool with interactive approval flow:
1. **Call prompt-optimize tool** with user's prompt
2. **Parse JSON response** to extract optimization plan
3. **Display optimization plan** with:
- Detected domain and complexity
- Each optimization step (title and content)
- Expected improvement metrics
4. **Guide user through approval**:
- Present interactive menu with 5 options
- Handle user choice (approve all, approve specific, modify, edit final, cancel)
- Rebuild prompt based on selections
5. **Return final prompt** for execution
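Steps 1-2 above can be sketched as follows. The response shape mirrors the example tool response shown later in this document; the function name is an assumption.

```typescript
// Sketch of parsing the prompt-optimize tool's JSON response and
// short-circuiting when optimization was skipped. The shape mirrors the
// documented example response; the helper name is an assumption.
interface OptimizationStep { id: string; title: string; after: string; }
interface PromptOptimizationResult {
  version: number;
  originalPrompt: string;
  optimizedPrompt: string;
  domain: string;
  complexity: string;
  steps: OptimizationStep[];
  skipped: boolean;
}

function planFromResponse(json: string): { prompt: string; steps: OptimizationStep[] } {
  const result: PromptOptimizationResult = JSON.parse(json);
  if (result.skipped) {
    // Simple prompt or ! escape: fall back to the original prompt.
    return { prompt: result.originalPrompt, steps: [] };
  }
  return { prompt: result.optimizedPrompt, steps: result.steps };
}
```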
**Integration with prompt-refinement Skill** (optional enhancement):
Use skill: `prompt-refinement`
Phase: [auto-detect or specified]
The prompt-refinement skill provides:
- TCRO structuring (Task, Context, Requirements, Output)
- Phase-specific clarifying questions (research, specify, plan, work)
- Integration with incentive-prompting techniques
**Enhanced Workflow (when combining both):**
1. Call prompt-optimize tool
2. Display optimization plan
3. If phase unclear, invoke prompt-refinement skill
4. Apply phase-specific template (research, specify, plan, or work)
5. Structure prompt into TCRO format
6. Present refined prompt with step-by-step approval
7. Guide user through approval menu
8. Execute final approved prompt
**Example:**
```bash
# User provides vague prompt
/ai-eng/optimize "help me debug auth" --prompt
```

The system then:

1. Calls the prompt-optimize tool
2. Displays the optimization plan:
   - 📋 Optimization Plan (medium, security)
   - Step 1: Expert Persona
   - Step 2: Step-by-Step Reasoning
   - Step 3: Stakes Language
   - Step 4: Self-Evaluation
   - Expected improvement: +60-115% quality
3. Presents the options menu
4. User selects option 1 (approve all)
5. Returns the optimized prompt for execution
Based on type and sources, research:
Based on content type and context:
For Prompts:
For Code:
For Queries:
For Documentation:
## Optimization Proposal
### Analysis:
- **Type**: SQL Query Optimization
- **Issues**: Missing indexes, inefficient JOIN, no LIMIT clause
- **Performance Impact**: High (millions of rows)
### Proposed Changes:
1. **Add composite index** on (user_id, status, created_at)
2. **Refactor JOIN** to use indexed columns first
3. **Add pagination** with LIMIT and OFFSET for large results
4. **Add query monitoring** for performance tracking
### Research Sources:
- PostgreSQL Documentation: Query Planning and Optimization
- Database Performance Blog: Index Best Practices
- OpenSource Community Solutions: Similar query patterns
### Questions for You:
1. **Index Size**: What's the approximate table size (rows, growth rate)?
2. **Write Frequency**: How often are INSERTs/UPDATEs vs SELECTs?
3. **Consistency Requirements**: Can we accept slightly stale data for performance?
Do you want to proceed with these optimizations? (y/n/suggest modifications)
Based on research and feedback:
For Prompts:
For Code:
For Queries:
For Documentation:
User: /ai-eng/optimize "help me design authentication" --prompt
## Optimization Plan (medium, security)
Step 1: Expert Persona
You are a senior security engineer with 15+ years of authentication experience.
Step 2: Step-by-Step Reasoning
Take a deep breath and analyze this step by step.
Step 3: Stakes Language
This is important for the project's success. A thorough, complete solution is essential.
Step 4: Self-Evaluation
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Expected improvement: +60-115% quality (based on research-backed techniques)
- Expert Persona: +60% (Kong et al., 2023)
- Step-by-Step Reasoning: +46% (Yang et al., 2023)
- Stakes Language: +45% (Bsharat et al., 2023)
- Self-Evaluation: +10% calibration
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 1
✓ Using optimized prompt with 4 steps applied
[Proceeding to execute optimized prompt...]
User: /ai-eng/optimize "optimize database query" --prompt
## Optimization Plan (complex, database)
Step 1: Expert Persona
You are a senior database architect with 15+ years of PostgreSQL experience.
Step 2: Step-by-Step Reasoning
Take a deep breath and analyze this step by step.
Step 3: Stakes Language
This is important for the project's success. A thorough, complete solution is essential.
Step 4: Self-Evaluation
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Expected improvement: +60-115% quality
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 2
Which step(s) would you like to approve? (Enter step IDs, e.g., "1,2,4"): 1,2,4
✓ Using optimized prompt with 3 steps applied (skipped: Step 3 - Stakes Language)
[Proceeding to execute optimized prompt...]
User: /ai-eng/optimize "help with frontend refactoring" --prompt
## Optimization Plan (medium, frontend)
Step 1: Expert Persona
You are a senior frontend architect with 12+ years of React/Vue experience.
Step 2: Step-by-Step Reasoning
Take a deep breath and analyze this step by step.
Step 3: Stakes Language
This is important for the project's success. A thorough, complete solution is essential.
Step 4: Self-Evaluation
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Expected improvement: +60-115% quality
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 3
Which step would you like to modify? (Enter step ID: 1-4): 1
Step 1: Expert Persona
Current content:
You are a senior frontend architect with 12+ years of React/Vue experience.
New content:
You are a senior frontend engineer with 10+ years of React and TypeScript experience, specializing in performance optimization and accessibility.
✓ Step 1 modified
✓ Using optimized prompt with updated steps
[Proceeding to execute optimized prompt...]
User: /ai-eng/optimize "design API endpoints" --prompt
## Optimization Plan (medium, backend)
Step 1: Expert Persona
You are a senior backend engineer with 15+ years of distributed systems experience.
Step 2: Step-by-Step Reasoning
Take a deep breath and analyze this step by step.
Step 3: Stakes Language
This is important for the project's success. A thorough, complete solution is essential.
Step 4: Self-Evaluation
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Expected improvement: +60-115% quality
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 4
Current optimized prompt:
You are a senior backend engineer with 15+ years of distributed systems experience.
Take a deep breath and analyze this step by step.
This is important for the project's success. A thorough, complete solution is essential.
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Task: design API endpoints
Edit this prompt:
You are a senior backend engineer with 15+ years of distributed systems experience, specializing in REST API design and microservices.
Take a deep breath and analyze this step by step.
This API is critical for our application's performance and scalability.
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Task: design RESTful API endpoints for user authentication and authorization
✓ Using your edited prompt
[Proceeding to execute your edited prompt...]
User: /ai-eng/optimize "hello world" --prompt
ℹ️ Simple prompt - optimization not beneficial
[Proceeding with original prompt...]
User: /ai-eng/optimize "!just run the tests" --prompt
ℹ️ Optimization skipped: User requested bypass with ! prefix
[Proceeding with original prompt: just run the tests]
User: /optimize "Help me fix my authentication" --prompt --interactive
## Optimization Preview
### Analysis:
- Current Prompt: Basic request without structure
- Target Model: Claude 3.5 Sonnet
- Missing Elements: Context, error scenarios, expected output format
### Research-Based Enhancements:
1. **Expert Persona**: "You are a senior security engineer with 10+ years..."
2. **Stakes Language**: "This authentication system is critical to production security..."
3. **Step-by-Step Reasoning**: "Take a deep breath and analyze systematically..."
4. **Self-Evaluation**: "Rate your confidence 0-1 and explain reasoning..."
### Questions:
1. What authentication methods are you using? (JWT, OAuth, session-based?)
2. Are there specific error messages you're seeing?
3. What's the tech stack (React/Node, Django/Python)?
### Interactive Refinement:
Based on your responses, tailored optimization applied...
**Enhanced Prompt Ready for Application**
User: /optimize "SELECT * FROM large_table WHERE category = 'active'" --query --preview
## Query Optimization Preview
### Analysis:
- **Query**: Full table scan with category filter
- **Table Size**: ~10M rows, growing at 50k/day
- **Performance Issues**: No index on category, full table scan
### Proposed Optimizations:
1. **Add Index**: CREATE INDEX idx_large_table_category ON large_table(category)
2. **Partial Results**: Add LIMIT clause with pagination for large result sets
3. **Query Rewrite**: Use covering index for better performance
### Expected Impact:
- **Before**: Full table scan (~500ms avg, 2s peak)
- **After**: Index seek (~5ms avg, 50ms peak)
- **Improvement**: 99% reduction in query time
### Research Sources:
- PostgreSQL Query Planning Guide
- Database Performance Best Practices
- Similar OpenSource Query Patterns
Apply optimizations? (y/n/modify)
User: /optimize "fix function in auth.js" --code --file src/auth.js --apply
## Code Optimization Process
### Analysis:
- **File**: src/auth.js (authentication logic)
- **Issues**: No input validation, synchronous processing, missing error handling
- **Performance Impact**: Medium (blocking I/O operations)
### Applied Optimizations:
✓ Added input validation and sanitization
✓ Implemented async/await patterns for non-blocking operations
✓ Enhanced error handling with specific error types
✓ Added logging for debugging and monitoring
✓ Improved code organization and separation of concerns
### Quality Metrics:
- **Security**: Enhanced with proper validation and error handling
- **Performance**: Non-blocking operations, ~60% faster response times
- **Maintainability**: Better error messages and code structure
- **Reliability**: Comprehensive error recovery paths
The step-by-step approval workflow can be configured:
```json
{
  "promptOptimization": {
    "enabled": true,
    "autoApprove": false,
    "verbosity": "normal",
    "skipForSimplePrompts": true,
    "escapePrefix": "!"
  }
}
```
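A sketch of how these settings might be merged with their defaults at load time. The config shape and defaults come from this document; the loader itself is illustrative.

```typescript
// Merge user-supplied settings over the documented defaults. The shape
// matches the promptOptimization config; the helper is illustrative.
interface PromptOptimizationConfig {
  enabled: boolean;
  autoApprove: boolean;
  verbosity: "quiet" | "normal" | "verbose";
  skipForSimplePrompts: boolean;
  escapePrefix: string;
}

const DEFAULTS: PromptOptimizationConfig = {
  enabled: true,
  autoApprove: false,
  verbosity: "normal",
  skipForSimplePrompts: true,
  escapePrefix: "!",
};

function loadConfig(user: Partial<PromptOptimizationConfig>): PromptOptimizationConfig {
  // Explicit user keys override defaults; missing keys fall back.
  return { ...DEFAULTS, ...user };
}
```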
Settings:

- `enabled`: Enable/disable prompt optimization (default: true)
- `autoApprove`: Skip approval menu and apply all steps (default: false)
- `verbosity`: Output detail level - quiet|normal|verbose (default: normal)
- `skipForSimplePrompts`: Automatically skip simple prompts (default: true)
- `escapePrefix`: Prefix to skip optimization (default: "!")

Auto-Approve Mode:
When autoApprove is enabled, the system:
Session Commands:
```bash
# Toggle auto-approve for current session
/optimize-auto on|off

# Change verbosity for current session
/optimize-verbosity quiet|normal|verbose
```
Combine insights from multiple authoritative sources:
```json
{
  "sources": {
    "anthropic": {
      "priority": "high",
      "focus": ["prompt-structure", "model-specific-optimization"]
    },
    "openai": {
      "priority": "high",
      "focus": ["prompt-engineering", "response-quality"]
    },
    "opencode": {
      "priority": "medium",
      "focus": ["community-patterns", "practical-examples"]
    }
  }
}
```
```bash
# Optimize and modify file in place
/optimize --file README.md --type docs --apply

# Optimize and save to new file
/optimize --file config.json --type code --output config-optimized.json

# Optimize git staged files
/optimize --staged --type commit --interactive
```
Related commands:

- `/clean`: Remove verbosity from optimized content if needed
- `/review`: Validate optimization quality and suggest improvements
- `/work`: Apply optimizations during implementation tasks
- `/ai-eng/research`: Gather context before optimization
- `/ai-eng/specify`: Create specifications with optimized prompts

```bash
# During development
/ai-eng/optimize "implement user authentication" --prompt
# Review optimization plan
# Approve all steps
# Implement based on optimized prompt

/ai-eng/optimize ./src/auth.js --code --preview
# Review and apply code optimizations

/ai-eng/review
# Validate final implementation
```
```bash
# Research phase with optimized prompt
/ai-eng/optimize "How does authentication work in this codebase?" --prompt
# Review optimization plan
# Approve steps
# Research executes with optimized prompt

# Specification phase with optimized prompt
/ai-eng/optimize "Create specification for user authentication" --prompt
# Review optimization plan
# Modify expert persona step
# Approve steps
# Specification generated with optimized prompt

# Planning phase with optimized prompt
/ai-eng/optimize "Plan implementation of authentication system" --prompt
# Review optimization plan
# Approve specific steps (skip stakes language)
# Plan created with approved steps

# Work phase with optimized prompt
/ai-eng/optimize "Implement authentication feature following spec" --prompt
# Review optimization plan
# Edit final prompt for specific constraints
# Implementation executes with customized prompt
```
- Ensure the `--prompt` flag is used for explicit optimization
- Check whether the prompt starts with `!`, which triggers skip mode
- Run `/optimize <prompt> --prompt --verbose` to inspect the process
- Enable auto-approve with `/optimize-auto on` or `"autoApprove": true`
- Use the `!` prefix for a single-prompt skip
- Use `--preview` mode to review changes before applying
- Use `--mode conservative` for minimal changes
- Use `--interactive` to approve each change individually
- Narrow `--sources`, or use `--verbose` to see conflicting recommendations

Successful optimization achieves:
The step-by-step approval workflow adapts to different execution platforms:
Mechanism: Automatic interception via UserPromptSubmit hook
Flow:
Example:
User: help me design authentication
[Hook analyzes prompt, calls prompt-optimize tool]
🧧 Prompt optimized (medium, security)
Step 1: Expert Persona
You are a senior security engineer with 15+ years...
Step 2: Step-by-Step Reasoning
Take a deep breath and analyze this step by step.
[2 more steps...]
Expected improvement: +60-115% quality
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 1
✓ Using optimized prompt with 4 steps applied
[Model executes with optimized prompt]
Mechanism: Explicit tool call by model
Flow:
The model explicitly invokes the `/ai-eng/optimize` command.

Example:
User: /ai-eng/optimize "help me design authentication"
[Command calls prompt-optimize tool, gets JSON]
📋 Optimization Plan (medium, security)
Step 1: Expert Persona
You are a senior security engineer with 15+ years...
[2 more steps...]
Expected improvement: +60-115% quality
Options:
1. Approve all steps
2. Approve specific step
3. Modify step
4. Edit final prompt
5. Cancel
User: 1
✓ Using optimized prompt with 4 steps applied
[Final prompt ready for execution]
| Aspect | Claude Code | OpenCode |
|---|---|---|
| Trigger | Automatic (hook) | Manual (command) |
| Tool Call | Hook calls automatically | Command calls explicitly |
| Timing | Before model sees prompt | When command invoked |
| User Control | Escape with ! prefix | Menu options + escape with ! |
| Integration | Seamless with normal usage | Explicit workflow |
| Configuration | .claude/ai-eng-config.json | opencode.json or ai-eng-config.json |
The approval flow works identically in both environments:
For Claude Code Hooks:

- `.claude/hooks/prompt-optimizer-hook.py` handles interception (prompts prefixed with `!` are skipped)

For OpenCode Commands:

- The command invokes the tool directly via `tool.execute()`

The optimize command provides interactive, research-driven enhancement of any content type with user collaboration and quality assurance.
```
┌─────────────────────────────────────┐
│ User invokes /ai-eng/optimize       │
└──────────────┬──────────────────────┘
               │
               v
┌─────────────────────────────────────┐
│ Call prompt-optimize tool           │
│ with user's prompt                  │
└──────────────┬──────────────────────┘
               │
               v
┌─────────────────────────────────────┐
│ Parse JSON response                 │
│ - version, originalPrompt           │
│ - optimizedPrompt, domain           │
│ - complexity, steps, skipped        │
└──────────────┬──────────────────────┘
               │
               v
       ┌───────┴───────┐
       │  Is skipped?  │──No──────────────┐
       └───────┬───────┘                  │
           Yes │                          │
               v                          │
┌─────────────────────────────────────┐   │
│ Display skip reason                 │   │
│ Use original prompt                 │   │
└──────────────┬──────────────────────┘   │
               │                          │
               v                          │
     [Proceed to execution]               │
                                          │
               ┌──────────────────────────┘
               │
               v
┌─────────────────────────────────────┐
│ Display optimization plan           │
│ - Domain, complexity                │
│ - Steps with titles & content       │
│ - Expected improvement              │
└──────────────┬──────────────────────┘
               │
               v
┌─────────────────────────────────────┐
│ Present approval menu               │
│ 1. Approve all steps                │
│ 2. Approve specific step            │
│ 3. Modify step                      │
│ 4. Edit final prompt                │
│ 5. Cancel                           │
└──────────────┬──────────────────────┘
               │
               v
┌─────────────────────────────────────┐
│ User selects option                 │
└──────────────┬──────────────────────┘
               │
    ┌──────────┼──────────┐
    │          │          │
    v          v          v
  Opt 1     Opt 2-4     Opt 5
    │          │          │
    v          v          v
Apply all  Customize   Cancel
    │          │          │
    v          v          v
┌──────┐   ┌──────┐   ┌──────┐
│ Use  │   │ Use  │   │ Use  │
│ opt- │   │ mod- │   │ orig │
│prompt│   │ified │   │prompt│
└───┬──┘   └───┬──┘   └───┬──┘
    │          │          │
    └──────────┼──────────┘
               │
               v
┌─────────────────────────────────────┐
│ Return final prompt                 │
│ For execution                       │
└─────────────────────────────────────┘
```
Input:
"help me design authentication"
Tool Response:
```json
{
  "version": 1,
  "originalPrompt": "help me design authentication",
  "optimizedPrompt": "You are a senior security engineer...\n\nTask: help me design authentication",
  "domain": "security",
  "complexity": "medium",
  "steps": [
    {"id": "persona", "title": "Expert Persona", "after": "You are a senior security engineer..."},
    {"id": "reasoning", "title": "Step-by-Step Reasoning", "after": "Take a deep breath..."},
    {"id": "stakes", "title": "Stakes Language", "after": "This is important..."},
    {"id": "selfEval", "title": "Self-Evaluation", "after": "Rate your confidence..."}
  ],
  "skipped": false
}
```
User Action: Approve all steps (option 1)
Final Output:
✓ Using optimized prompt with 4 steps applied
You are a senior security engineer with 15+ years of authentication experience.
Take a deep breath and analyze this step by step.
This is important for the project's success. A thorough, complete solution is essential.
After providing your solution, rate your confidence 0-1 and identify any assumptions you made.
Task: help me design authentication
The step-by-step approval workflow relies on the prompt-optimize tool from the ai-eng-system. Ensure:
Track approval state across the interactive flow:
```typescript
interface ApprovalState {
  originalPrompt: string;
  toolResult: PromptOptimizationResult;
  approvedSteps: string[];
  modifiedSteps: Map<string, string>;
  finalPrompt: string;
  currentAction: string;
}
```
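One way to initialize this state from a tool result, using simplified standalone types. Pre-selecting every step before the menu is shown is an assumption about the default behavior.

```typescript
// Illustrative initialization of the approval state from a tool result
// (simplified types). Pre-selecting all steps is an assumed default.
interface Step { id: string; title: string; after: string; }
interface ToolResult { originalPrompt: string; optimizedPrompt: string; steps: Step[]; }
interface ApprovalStateSketch {
  originalPrompt: string;
  toolResult: ToolResult;
  approvedSteps: string[];
  modifiedSteps: Map<string, string>;
  finalPrompt: string;
  currentAction: string;
}

function initApprovalState(result: ToolResult): ApprovalStateSketch {
  return {
    originalPrompt: result.originalPrompt,
    toolResult: result,
    approvedSteps: result.steps.map(s => s.id), // start with all steps selected
    modifiedSteps: new Map(),
    finalPrompt: result.optimizedPrompt,
    currentAction: "awaiting-menu-selection",
  };
}
```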
Display expected improvement based on applied steps:
```typescript
function calculateConfidenceImprovement(steps: string[]): string {
  const techniqueImpact: Record<string, number> = {
    persona: 60,   // Kong et al., 2023
    reasoning: 46, // Yang et al., 2023
    stakes: 45,    // Bsharat et al., 2023
    selfEval: 10   // Calibration
  };
  const total = steps.reduce((sum, stepId) => sum + (techniqueImpact[stepId] || 0), 0);
  return `+${total}% quality (based on research-backed techniques)`;
}
```
Handle various response scenarios:
Validate user selections:
```typescript
function validateSelection(input: string, stepCount: number): boolean {
  const choice = parseInt(input.trim(), 10);
  return choice >= 1 && choice <= 5; // five menu options
}

function validateStepIds(input: string, stepCount: number): string[] {
  const ids = input.split(',').map(s => s.trim());
  return ids.filter(id => parseInt(id, 10) >= 1 && parseInt(id, 10) <= stepCount);
}
```
Build final prompt from approved steps:
```typescript
function reconstructPrompt(
  steps: PromptOptimizationResult["steps"],
  approvedIds: string[],
  modifications: Map<string, string>,
  originalTask: string
): string {
  const parts: string[] = [];
  for (const step of steps) {
    if (approvedIds.includes(step.id)) {
      // Use the user's modification if one exists, else the step's text.
      const content = modifications.get(step.id) || step.after;
      if (content) {
        parts.push(content);
      }
    }
  }
  parts.push(`Task: ${originalTask}`);
  return parts.join("\n\n");
}
```
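A worked example of the same reconstruction logic, as a compact standalone variant: approve the persona and reasoning steps, modify the persona, and skip the rest. The step contents come from the security example earlier in this document; the modified persona text is hypothetical.

```typescript
// Compact standalone variant of the reconstruction logic, for illustration.
interface Step { id: string; after: string; }

function rebuild(
  steps: Step[],
  approvedIds: string[],
  modifications: Map<string, string>,
  originalTask: string
): string {
  const parts = steps
    .filter(s => approvedIds.includes(s.id))
    .map(s => modifications.get(s.id) ?? s.after);
  return [...parts, `Task: ${originalTask}`].join("\n\n");
}

const steps: Step[] = [
  { id: "persona", after: "You are a senior security engineer with 15+ years of authentication experience." },
  { id: "reasoning", after: "Take a deep breath and analyze this step by step." },
  { id: "stakes", after: "This is important for the project's success." },
];
// Hypothetical user modification of the persona step.
const mods = new Map([["persona", "You are a staff security engineer specializing in OAuth 2.0."]]);
const finalPrompt = rebuild(steps, ["persona", "reasoning"], mods, "help me design authentication");
// finalPrompt keeps the modified persona and the reasoning step, omits
// the stakes step, and ends with the original task line.
```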
After completing optimization, rate your confidence in improvement quality (0.0-1.0). Identify any uncertainties about applied techniques, areas where optimization may have been too aggressive or conservative, or aspects of original content that may have been degraded. Note any feedback patterns or technique effectiveness observations.