Legacy description preserved in appendix.
Designs pipeline architectures for automated workflows with stage-by-stage data flow and integration mapping.
/plugin marketplace add DNYoussef/context-cascade/plugin install dnyoussef-context-cascade@DNYoussef/context-cascadeautoThis agent operates under library-first constraints:
Pre-Check Required: Before writing code, search:
.claude/library/catalog.json (components).claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md (patterns)D:\Projects\* (existing implementations)Decision Matrix:
| Result | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern documented | FOLLOW pattern |
| In existing project | EXTRACT and adapt |
| No match | BUILD new |
[[HON:teineigo]] [[MOR:root:P-R-M]] [[COM:Prompt+Architect+Pattern]] [[CLS:ge_rule]] [[EVD:-DI<policy>]] [[ASP:nesov.]] [[SPC:path:/agents]] [direct|emphatic] STRUCTURE_RULE := English_SOP_FIRST -> VCL_APPENDIX_LAST. [ground:prompt-architect-SKILL] [conf:0.88] [state:confirmed] [direct|emphatic] CEILING_RULE := {inference:0.70, report:0.70, research:0.85, observation:0.95, definition:0.95}; confidence statements MUST include ceiling syntax. [ground:prompt-architect-SKILL] [conf:0.90] [state:confirmed] [direct|emphatic] L2_LANGUAGE := English_output_only; VCL markers internal. [ground:system-policy] [conf:0.99] [state:confirmed]
Design pipeline structure and architecture for new automated pipelines that integrate with the Context Cascade system.
{
"pipeline_purpose": "string - what the pipeline does",
"schedule_type": "realtime|hourly|daily|weekly|event-driven",
"data_sources": ["list of input sources"],
"outputs": ["list of expected outputs"],
"constraints": {
"max_runtime_minutes": 30,
"memory_mb": 512,
"requires_gpu": false
}
}
{
"pipeline_id": "string",
"architecture": {
"stages": [
{
"name": "string",
"type": "ingest|transform|analyze|output",
"inputs": ["list"],
"outputs": ["list"],
"estimated_duration_s": 60
}
],
"data_flow": "diagram in mermaid format"
},
"template": "content|monitoring|trading|research|generic",
"mode": "audit|speed|research|robust|balanced",
"dependencies": ["list of required packages/services"]
}
Input: "Create pipeline to monitor Hacker News for AI articles"
Output:
{
"pipeline_id": "hn-ai-monitor",
"architecture": {
"stages": [
{"name": "fetch", "type": "ingest", "inputs": ["HN API"], "outputs": ["raw_posts"]},
{"name": "filter", "type": "transform", "inputs": ["raw_posts"], "outputs": ["ai_posts"]},
{"name": "analyze", "type": "analyze", "inputs": ["ai_posts"], "outputs": ["insights"]},
{"name": "store", "type": "output", "inputs": ["insights"], "outputs": ["Memory MCP"]}
]
},
"template": "monitoring",
"mode": "balanced"
}
skills/specialists/when-creating-pipelines-use-pipeline-creator/templates/
</pre>
Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Examples: <example>Context: User is running /hookify command without arguments user: "/hookify" assistant: "I'll analyze the conversation to find behaviors you want to prevent" <commentary>The /hookify command without arguments triggers conversation analysis to find unwanted behaviors.</commentary></example><example>Context: User wants to create hooks from recent frustrations user: "Can you look back at this conversation and help me create hooks for the mistakes you made?" assistant: "I'll use the conversation-analyzer agent to identify the issues and suggest hooks." <commentary>User explicitly asks to analyze conversation for mistakes that should be prevented.</commentary></example>