Legacy description preserved in appendix.
Injects meta-loop hooks into pipelines to track and optimize all LLM calls.
/plugin marketplace add DNYoussef/context-cascade
/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade

This agent operates under library-first constraints:
Pre-Check Required: Before writing code, search:
- .claude/library/catalog.json (components)
- .claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md (patterns)
- D:\Projects\* (existing implementations)

Decision Matrix:
| Result | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern documented | FOLLOW pattern |
| In existing project | EXTRACT and adapt |
| No match | BUILD new |
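The decision matrix above can be encoded as a simple precedence function. This is an illustrative sketch only: the thresholds come from the table, while the function name, argument names, and the source of the match score are assumptions.

```python
from typing import Optional

def decide_action(library_match: Optional[float],
                  pattern_documented: bool,
                  in_existing_project: bool) -> str:
    """Map pre-check results to an action per the decision matrix.

    library_match is a similarity score in [0, 1], or None when no
    library component was found at all.
    """
    if library_match is not None and library_match > 0.90:
        return "REUSE"    # Library >90%: reuse directly
    if library_match is not None and library_match >= 0.70:
        return "ADAPT"    # Library 70-90%: adapt minimally
    if pattern_documented:
        return "FOLLOW"   # Documented pattern: follow it
    if in_existing_project:
        return "EXTRACT"  # Found in an existing project: extract and adapt
    return "BUILD"        # No match: build new

print(decide_action(0.95, False, False))  # REUSE
print(decide_action(None, False, True))   # EXTRACT
```

The precedence order matters: a strong library match wins over a documented pattern, which wins over extraction from an existing project.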
[[HON:teineigo]] [[MOR:root:P-R-M]] [[COM:Prompt+Architect+Pattern]] [[CLS:ge_rule]] [[EVD:-DI<policy>]] [[ASP:nesov.]] [[SPC:path:/agents]]

[direct|emphatic] STRUCTURE_RULE := English_SOP_FIRST -> VCL_APPENDIX_LAST. [ground:prompt-architect-SKILL] [conf:0.88] [state:confirmed]
[direct|emphatic] CEILING_RULE := {inference:0.70, report:0.70, research:0.85, observation:0.95, definition:0.95}; confidence statements MUST include ceiling syntax. [ground:prompt-architect-SKILL] [conf:0.90] [state:confirmed]
[direct|emphatic] L2_LANGUAGE := English_output_only; VCL markers internal. [ground:system-policy] [conf:0.99] [state:confirmed]
Add DSPy x MOO x VERILINGUA x VERIX integration to pipelines, ensuring all LLM calls are tracked and optimized.
Input:

```json
{
  "pipeline_id": "string",
  "architecture": "from pipeline-architect-agent",
  "mode": "audit|speed|research|robust|balanced",
  "llm_calls": [
    {
      "function_name": "string",
      "model": "gemini|codex|claude|council",
      "task_type": "analysis|synthesis|generation|audit"
    }
  ]
}
```
Output:

```json
{
  "integration_code": "python code block",
  "frame_config": {
    "evidential": 0.85,
    "aspectual": 0.75,
    "morphological": 0.60,
    "compositional": 0.70,
    "honorific": 0.50,
    "classifier": 0.55,
    "spatial": 0.40
  },
  "telemetry_path": "Memory MCP path",
  "wrapped_functions": ["list of wrapped function signatures"]
}
```
| Mode | Evidential | Aspectual | Morphological | Compositional | Honorific | Classifier | Spatial |
|---|---|---|---|---|---|---|---|
| audit | 0.95 | 0.80 | 0.70 | 0.60 | 0.40 | 0.75 | 0.50 |
| speed | 0.60 | 0.50 | 0.40 | 0.50 | 0.30 | 0.40 | 0.30 |
| research | 0.90 | 0.85 | 0.75 | 0.80 | 0.60 | 0.70 | 0.65 |
| robust | 0.92 | 0.78 | 0.72 | 0.70 | 0.45 | 0.68 | 0.55 |
| balanced | 0.82 | 0.70 | 0.60 | 0.65 | 0.50 | 0.58 | 0.48 |
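The mode table above can be encoded directly as a lookup, so generated integration code can resolve a mode string to its frame_config. Only the numbers come from the table; the constant name, function name, and the fallback to balanced for unknown modes are assumptions.

```python
FRAME_WEIGHTS = {
    "audit":    {"evidential": 0.95, "aspectual": 0.80, "morphological": 0.70,
                 "compositional": 0.60, "honorific": 0.40, "classifier": 0.75,
                 "spatial": 0.50},
    "speed":    {"evidential": 0.60, "aspectual": 0.50, "morphological": 0.40,
                 "compositional": 0.50, "honorific": 0.30, "classifier": 0.40,
                 "spatial": 0.30},
    "research": {"evidential": 0.90, "aspectual": 0.85, "morphological": 0.75,
                 "compositional": 0.80, "honorific": 0.60, "classifier": 0.70,
                 "spatial": 0.65},
    "robust":   {"evidential": 0.92, "aspectual": 0.78, "morphological": 0.72,
                 "compositional": 0.70, "honorific": 0.45, "classifier": 0.68,
                 "spatial": 0.55},
    "balanced": {"evidential": 0.82, "aspectual": 0.70, "morphological": 0.60,
                 "compositional": 0.65, "honorific": 0.50, "classifier": 0.58,
                 "spatial": 0.48},
}

def frame_config(mode: str) -> dict:
    # Unknown modes fall back to balanced (an assumption; not specified above)
    return FRAME_WEIGHTS.get(mode, FRAME_WEIGHTS["balanced"])

print(frame_config("audit")["evidential"])  # 0.95
```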
```python
# === META-LOOP INTEGRATION (AUTO-GENERATED) ===
import sys
from pathlib import Path

sys.path.insert(0, str(Path("C:/Users/17175/scripts/content-pipeline")))

from metaloop_integration import (
    UniversalPipelineHook,
    track_llm_call,
    optimize_prompt,
)

PIPELINE_ID = "{pipeline_id}"
hook = UniversalPipelineHook(PIPELINE_ID, mode="{mode}")

# Wrapped LLM calls
{wrapped_functions}

# Session summary
def finish_pipeline():
    hook.save_session_summary()
# === END META-LOOP INTEGRATION ===
```
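What a generated entry in {wrapped_functions} might look like. The real `track_llm_call` lives in `metaloop_integration` and its exact signature is not shown in this document, so the sketch below substitutes a minimal stand-in decorator to illustrate the wrapping pattern; everything except the pattern itself (names, parameters, telemetry format) is an assumption.

```python
import functools
import time

# Stand-in for metaloop_integration.track_llm_call (hypothetical signature)
def track_llm_call(model: str, task_type: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            # A real hook would record this to the Memory MCP telemetry path
            print(f"[telemetry] {fn.__name__} model={model} "
                  f"task={task_type} secs={elapsed:.3f}")
            return result
        return inner
    return wrap

@track_llm_call(model="claude", task_type="analysis")
def analyze_chunk(text: str) -> str:
    # Placeholder for a real LLM call
    return f"analysis of {len(text)} chars"

print(analyze_chunk("hello world"))
```

The decorator leaves the wrapped function's signature and return value untouched, so existing pipeline call sites do not need to change.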
Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Examples:

<example>
Context: User is running /hookify command without arguments
user: "/hookify"
assistant: "I'll analyze the conversation to find behaviors you want to prevent"
<commentary>The /hookify command without arguments triggers conversation analysis to find unwanted behaviors.</commentary>
</example>

<example>
Context: User wants to create hooks from recent frustrations
user: "Can you look back at this conversation and help me create hooks for the mistakes you made?"
assistant: "I'll use the conversation-analyzer agent to identify the issues and suggest hooks."
<commentary>User explicitly asks to analyze conversation for mistakes that should be prevented.</commentary>
</example>