Execute Recursive Meta-Prompting loop with quality convergence
Iteratively refines solutions using a monadic quality loop until convergence. Use this for complex tasks requiring multiple refinement passes with measurable quality thresholds.
/plugin marketplace add manutej/categorical-meta-prompting-plugin
/plugin install manutej-categorical-meta-prompting-framework@manutej/categorical-meta-prompting-plugin

/rmp @quality:[threshold] @max_iterations:[n] @mode:[mode] [task]

This command implements Monad M for iterative refinement with quality convergence.
Skill: atomic-blocks (internal)
M = (Prompt, unit, bind)
- unit(p) = MonadPrompt(p, quality=initial)
- bind(ma, f) = f(ma.prompt) with quality tracking
- join(mma) = flatten nested monads
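A minimal Python sketch of this structure, assuming a MonadPrompt dataclass with a quality field in [0,1] (the names and the quality-tracking policy here are illustrative, not the plugin's internals):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonadPrompt:
    """M(Prompt): a prompt paired with a quality score in [0, 1]."""
    prompt: "str | MonadPrompt"
    quality: float = 0.0  # assumed initial quality

def unit(p: str) -> MonadPrompt:
    """unit(p): wrap a plain prompt with an initial quality."""
    return MonadPrompt(p, quality=0.0)

def bind(ma: MonadPrompt, f: Callable[[str], MonadPrompt]) -> MonadPrompt:
    """bind(ma, f) = f(ma.prompt), with quality tracking (here: keep the best seen)."""
    mb = f(ma.prompt)
    return MonadPrompt(mb.prompt, quality=max(ma.quality, mb.quality))

def join(mma: MonadPrompt) -> MonadPrompt:
    """join(mma): flatten a nested MonadPrompt into a single layer."""
    inner = mma.prompt
    if isinstance(inner, MonadPrompt):
        return MonadPrompt(inner.prompt, quality=max(mma.quality, inner.quality))
    return mma
```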
Monad Laws Verified:
- Left identity:  unit(a) >>= f = f(a)
- Right identity: m >>= unit = m
- Associativity:  (m >>= f) >>= g = m >>= (λx. f(x) >>= g)

This command internally uses the following atomic blocks:
/rmp Block Flow:
┌──────────────────────────────────────────────────────────┐
│                                                          │
│  Task ─► apply_transform ─► execute_prompt ─► Output     │
│                                   │                      │
│                                   ▼                      │
│                            assess_quality                │
│                                   │                      │
│                                   ▼                      │
│                         evaluate_convergence             │
│                            │            │                │
│                       [CONVERGED]   [CONTINUE]           │
│                            │            │                │
│                            ▼            ▼                │
│                         RETURN extract_improvement       │
│                                         │                │
│                                         ▼                │
│                                 apply_refinement         │
│                                         │                │
│                                         └──► (loop back) │
│                                                          │
└──────────────────────────────────────────────────────────┘
Blocks Used:
├── apply_transform : (Template, Task) → Prompt
├── execute_prompt : Prompt → Output
├── assess_quality : Output → QualityVector
├── evaluate_convergence: (QualityVector, Threshold) → Status
├── extract_improvement: (Output, QualityVector) → Direction
├── apply_refinement : (Output, Direction) → RefinedOutput
└── aggregate_iterations: [Output] → BestOutput
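Roughly, these blocks compose into a loop along the following lines (a sketch under assumed signatures, with the block implementations injected rather than defined here):

```python
from typing import Callable, Dict, List

def rmp_block_loop(
    task: str,
    template: str,
    blocks: Dict[str, Callable],   # the seven atomic blocks, supplied by the caller
    threshold: float = 0.8,
    max_iterations: int = 5,
):
    """Sketch of the /rmp block flow: transform, execute, assess, refine until converged."""
    prompt = blocks["apply_transform"](template, task)         # (Template, Task) → Prompt
    output = blocks["execute_prompt"](prompt)                  # Prompt → Output
    outputs: List[object] = []
    for _ in range(max_iterations):
        quality = blocks["assess_quality"](output)             # Output → QualityVector
        outputs.append(output)
        if blocks["evaluate_convergence"](quality, threshold) == "CONVERGED":
            break
        direction = blocks["extract_improvement"](output, quality)
        output = blocks["apply_refinement"](output, direction)  # loop back with refined output
    return blocks["aggregate_iterations"](outputs)             # [Output] → BestOutput
```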
Override individual blocks with the @block: modifier (a parsing sketch follows these examples):
# Override quality threshold
/rmp @block:evaluate_convergence.threshold:0.9 "task"
# Override quality weights
/rmp @block:assess_quality.weights:{correctness:0.5,clarity:0.2} "task"
# Override max refinement attempts per iteration
/rmp @block:apply_refinement.max_attempts:2 "task"
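For illustration only, a hypothetical parser for the @block:<block>.<param>:<value> form (the function name and return shape are assumptions, not part of the plugin):

```python
def parse_block_override(token: str):
    """Split a hypothetical '@block:<block>.<param>:<value>' token into its parts."""
    assert token.startswith("@block:")
    path, _, raw_value = token[len("@block:"):].partition(":")
    block, _, param = path.partition(".")
    return block, param, raw_value

# parse_block_override("@block:evaluate_convergence.threshold:0.9")
# → ("evaluate_convergence", "threshold", "0.9")
```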
/rmp @quality:0.85 @max_iterations:5 @mode:iterative "task description"
| Modifier | Default | Description |
|---|---|---|
| @quality: | 0.8 | Target quality threshold (0.0-1.0) |
| @max_iterations: | 5 | Maximum refinement iterations |
| @mode: | iterative | Execution mode: iterative, dry-run, spec |
| @budget: | auto | Token budget for entire RMP loop |
| @variance: | 20% | Acceptable budget variance |
| Operator | Meaning | Example |
|---|---|---|
| >=> | Kleisli (monadic refinement) | [analyze>=>design>=>implement] |
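In code, Kleisli composition chains steps of type str → MonadPrompt; a minimal sketch, reusing the bind assumed in the earlier monad sketch:

```python
def kleisli(f, g):
    """(>=>): compose two refinement steps f, g : str → MonadPrompt via bind."""
    return lambda prompt: bind(f(prompt), g)

# [analyze >=> design >=> implement], assuming each stage returns a MonadPrompt:
# pipeline = kleisli(kleisli(analyze, design), implement)
# result = pipeline("build auth system")
```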
$ARGUMENTS
If @mode:dry-run was specified, show plan and exit:
RMP_PLAN:
task: [task]
quality_threshold: [threshold]
max_iterations: [n]
categorical_structure: M(Prompt) with unit/bind
estimated_trajectory: [0.5, 0.65, 0.75, 0.82, 0.87]
exit: Plan generated, no execution
If @mode:spec was specified, generate specification and exit:
name: rmp-[task-hash]
type: iterative_refinement
categorical_structure:
monad: M(Prompt) →^n Prompt
unit: initial → MonadPrompt
bind: MonadPrompt → (Prompt → MonadPrompt) → MonadPrompt
enrichment: [0,1]-quality
operators:
- >=> (Kleisli composition)
config:
quality_threshold: [value]
max_iterations: [value]
stages:
- {name: initial, operator: M.unit}
- {name: assess, operator: evaluate → [0,1]}
- {name: refine, operator: M.bind, condition: quality < threshold}
- {name: converge, operator: M.return, condition: quality >= threshold}
Execute an RMP loop using M.bind for structured iterative refinement:
┌──────────────────────────────────────────┐
│ M.bind(iteration_n, refine)              │
├──────────────────────────────────────────┤
│ 1. M.unit(prompt) → MonadPrompt          │
│ 2. Execute → output with quality         │
│ 3. M.assess(output) → quality ∈ [0,1]    │
│ 4. If quality >= @quality: M.return      │
│ 5. Else: M.bind(refine) → iteration n+1  │
└──────────────────────────────────────────┘
Categorical Semantics:
M.bind(current, improve)

For each iteration, assess quality using tensor product semantics:
| Dimension | Weight | Score (0-1) | Notes |
|---|---|---|---|
| Correctness | 40% | ?/1.0 | Does it solve the problem correctly? |
| Clarity | 25% | ?/1.0 | Is it clear and understandable? |
| Completeness | 20% | ?/1.0 | Are edge cases and requirements covered? |
| Efficiency | 15% | ?/1.0 | Is the solution well-designed? |
| Aggregate | 100% | ?/1.0 | Weighted sum for @quality: comparison |
Quality Formula (Enriched [0,1]):
quality = 0.40 × correct + 0.25 × clear + 0.20 × complete + 0.15 × efficient
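As a concrete sketch, the aggregate is just this weighted sum over the four dimensions (the function name is illustrative):

```python
WEIGHTS = {"correctness": 0.40, "clarity": 0.25, "completeness": 0.20, "efficiency": 0.15}

def aggregate_quality(scores: dict) -> float:
    """Weighted sum over the enriched [0,1] quality dimensions."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# aggregate_quality({"correctness": 0.9, "clarity": 0.8, "completeness": 0.7, "efficiency": 0.6})
# → 0.40*0.9 + 0.25*0.8 + 0.20*0.7 + 0.15*0.6 = 0.79
```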
Attempt: [Generate initial solution to the task]
Quality Assessment (M.assess → [0,1]):
| Dimension | Score | Justification |
|---|---|---|
| Correctness | /1.0 | |
| Clarity | /1.0 | |
| Completeness | /1.0 | |
| Efficiency | /1.0 | |
| Aggregate | /1.0 | |
Decision: [CONVERGED if >= threshold] | [CONTINUE if < threshold]
Improvement Direction (for M.bind application): [What specific aspect needs the most improvement - guide next iteration]
Continue iterations, applying M.bind at each step:
M.bind(current, improve) = improve(current.prompt) with quality tracking
Refinement Focus: [Based on lowest dimension from previous iteration]
Attempt: [Generate improved solution focusing on the identified weakness]
Quality Assessment:
| Dimension | Score | Delta | Notes |
|---|---|---|---|
| Correctness | /1.0 | [+/-] | |
| Clarity | /1.0 | [+/-] | |
| Completeness | /1.0 | [+/-] | |
| Efficiency | /1.0 | [+/-] | |
| Aggregate | /1.0 | [+/-] | |
Decision: [CONVERGED | CONTINUE | MAX_ITERATIONS | NO_IMPROVEMENT]
RMP_CHECKPOINT_[i]:
iteration: [n]
quality:
correctness: [0-1]
clarity: [0-1]
completeness: [0-1]
efficiency: [0-1]
aggregate: [0-1]
quality_delta: [+/- from previous]
trend: [RAPID_IMPROVEMENT | STEADY_IMPROVEMENT | PLATEAU | DEGRADING]
status: [CONTINUE | CONVERGED | MAX_ITERATIONS | NO_IMPROVEMENT]
budget:
used: [tokens]
remaining: [tokens]
variance: [%]
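A rough sketch of how the trend and status fields could be derived from the aggregate-quality history; the delta cut-offs here are assumptions, not plugin-defined constants:

```python
from typing import List, Tuple

def checkpoint_status(history: List[float], threshold: float, max_iterations: int) -> Tuple[str, str, float]:
    """Classify trend and loop status from the per-iteration aggregate quality scores."""
    quality = history[-1]
    delta = quality - history[-2] if len(history) > 1 else quality

    if delta > 0.10:
        trend = "RAPID_IMPROVEMENT"
    elif delta > 0.02:
        trend = "STEADY_IMPROVEMENT"
    elif delta >= 0.0:
        trend = "PLATEAU"
    else:
        trend = "DEGRADING"

    if quality >= threshold:
        status = "CONVERGED"
    elif len(history) >= max_iterations:
        status = "MAX_ITERATIONS"
    elif len(history) > 1 and delta <= 0.0:
        status = "NO_IMPROVEMENT"
    else:
        status = "CONTINUE"
    return trend, status, delta
```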
Apply M.return to extract final value from monad:
╔══════════════════════════════════════════════════════════════╗
║                          RMP RESULT                          ║
╠══════════════════════════════════════════════════════════════╣
║ Task: [original task]                                        ║
║ Iterations: N                                                ║
║ Final Quality: X.XX/1.0                                      ║
║ Convergence: [ACHIEVED | MAX_ITERATIONS | NO_IMPROVEMENT]    ║
║ Categorical Structure: M(Prompt) via M.bind composition      ║
╚══════════════════════════════════════════════════════════════╝
Quality Trace (enriched category trajectory):
Iter 1: 0.XX → Iter 2: 0.XX → ... → Final: 0.XX
Solution:
[final refined solution]
Monadic Quality Trace:
M.unit(p₀) →[bind]→ M(p₁, q₁) →[bind]→ M(p₂, q₂) →...→ M.return(pₙ, qₙ)
Quality progression: q₁ → q₂ → ... → qₙ ≥ threshold
# Basic - uses defaults (@quality:0.8, @max_iterations:5)
/rmp "implement binary search"
# Explicit quality threshold
/rmp @quality:0.9 "optimize database query"
# With iteration limit
/rmp @quality:0.85 @max_iterations:3 "build REST API"
# With budget tracking
/rmp @quality:0.85 @budget:15000 @variance:15% "implement caching"
# Kleisli composition chain
/rmp @quality:0.85 [analyze>=>design>=>implement] "build auth system"
# Dry-run preview
/rmp @mode:dry-run @quality:0.9 "complex multi-step feature"
# Generate specification only
/rmp @mode:spec @quality:0.85 "data processing pipeline"
Old syntax still works:
/rmp "task" 0.85 # Positional quality threshold
/rmp "task" # Uses default threshold (0.8)
New unified syntax is preferred:
/rmp @quality:0.85 "task" # Explicit modifier