Legacy description preserved in appendix.
Configures telemetry storage with WHO/WHEN/PROJECT/WHY tagging for pipeline integration.
```
/plugin marketplace add DNYoussef/context-cascade
/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade
```

This agent operates under library-first constraints:
Pre-Check Required: Before writing code, search:

- `.claude/library/catalog.json` (components)
- `.claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md` (patterns)
- `D:\Projects\*` (existing implementations)

Decision Matrix:
| Result | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern documented | FOLLOW pattern |
| In existing project | EXTRACT and adapt |
| No match | BUILD new |
[[HON:teineigo]] [[MOR:root:P-R-M]] [[COM:Prompt+Architect+Pattern]] [[CLS:ge_rule]] [[EVD:-DI<policy>]] [[ASP:nesov.]] [[SPC:path:/agents]]

[direct|emphatic] STRUCTURE_RULE := English_SOP_FIRST -> VCL_APPENDIX_LAST. [ground:prompt-architect-SKILL] [conf:0.88] [state:confirmed]
[direct|emphatic] CEILING_RULE := {inference:0.70, report:0.70, research:0.85, observation:0.95, definition:0.95}; confidence statements MUST include ceiling syntax. [ground:prompt-architect-SKILL] [conf:0.90] [state:confirmed]
[direct|emphatic] L2_LANGUAGE := English_output_only; VCL markers internal. [ground:system-policy] [conf:0.99] [state:confirmed]
Configure Memory MCP telemetry storage and WHO/WHEN/PROJECT/WHY tagging for pipeline integration.
```
C:\Users\17175\.claude\memory-mcp-data\
  telemetry\
    executions\
      {YYYY-MM-DD}\
        {task_id}.json        <- Individual execution records
    meta-loop\
      named_modes.json        <- Optimized mode configurations
      outcomes_{date}.jsonl   <- MOO optimization outcomes
      session_{id}.json       <- Session summaries
    pipelines\
      {pipeline_id}\
        config.yaml           <- Pipeline configuration
        runs\
          {run_id}.json       <- Individual run records
```
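The record paths in this layout compose mechanically. As a sketch (the helper names are illustrative, not part of the storage contract):

```python
from pathlib import Path

def execution_record_path(base: Path, date: str, task_id: str) -> Path:
    """Individual execution record: executions/{YYYY-MM-DD}/{task_id}.json"""
    return base / "executions" / date / f"{task_id}.json"

def run_record_path(base: Path, pipeline_id: str, run_id: str) -> Path:
    """Individual run record: pipelines/{pipeline_id}/runs/{run_id}.json"""
    return base / "pipelines" / pipeline_id / "runs" / f"{run_id}.json"
```

Keeping path construction in one place avoids drift between writers (executions) and readers (the meta-loop).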
WHO — Format: `{agent_name}-via-{pipeline_id}`

Examples:
- gemini-via-content-pipeline
- claude-via-hackathon-scanner
- council-via-zeitgeist-analysis

WHEN — Format: ISO 8601 with timezone

Example: `2025-12-29T12:00:00-05:00`

PROJECT — Format: Pipeline ID (kebab-case)

Examples:
- content-pipeline
- runway-dashboard
- hackathon-scanner

WHY — Format: Task type classification

Options:
- analysis - Analyzing data/content
- synthesis - Combining multiple sources
- generation - Creating new content
- audit - Checking/validating
- monitoring - Ongoing observation
- transformation - Converting formats

```json
{
  "task_id": "exec-{uuid4}",
  "timestamp": "{ISO8601}",
  "pipeline_id": "{pipeline_id}",
  "run_id": "{run_uuid}",
  "model_name": "{model}",
  "task_type": "{task_type}",
  "config_vector": {
    "evidential_frame": 0.85,
    "aspectual_frame": 0.75,
    "morphological_frame": 0.60,
    "compositional_frame": 0.70,
    "honorific_frame": 0.50,
    "classifier_frame": 0.55,
    "spatial_frame": 0.40,
    "verix_strictness": 1,
    "compression_level": 1,
    "require_ground": 0.80
  },
  "frame_scores": {
    "evidential": 0.8,
    "aspectual": 0.6,
    "morphological": 0.5,
    "compositional": 0.7,
    "honorific": 0.4,
    "classifier": 0.5,
    "spatial": 0.3
  },
  "verix_compliance_score": 0.75,
  "latency_ms": 1234,
  "input_tokens": 500,
  "output_tokens": 200,
  "task_success": true,
  "error": null,
  "metadata": {
    "WHO": "{agent}-via-{pipeline}",
    "WHEN": "{ISO8601}",
    "PROJECT": "{pipeline_id}",
    "WHY": "{task_type}"
  }
}
```
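Since the metadata tags are derived from top-level fields, a record can be checked for internal consistency. A minimal sketch, using the field names from the schema above (the validator itself is an assumption, not part of the spec):

```python
import re

def validate_metadata(record: dict) -> list[str]:
    """Return a list of tag-consistency problems; an empty list means valid."""
    meta = record.get("metadata", {})
    problems = []
    # WHO must be {model_name}-via-{pipeline_id}
    expected_who = f"{record['model_name']}-via-{record['pipeline_id']}"
    if meta.get("WHO") != expected_who:
        problems.append(f"WHO should be {expected_who!r}")
    # PROJECT and WHY mirror top-level fields
    if meta.get("PROJECT") != record["pipeline_id"]:
        problems.append("PROJECT should equal pipeline_id")
    if meta.get("WHY") != record["task_type"]:
        problems.append("WHY should equal task_type")
    # PROJECT values are kebab-case pipeline IDs
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", record["pipeline_id"]):
        problems.append("pipeline_id should be kebab-case")
    return problems
```

Running this before writing a record to disk keeps the meta-loop's WHO/WHEN/PROJECT/WHY queries trustworthy.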
```python
import json
import uuid
from datetime import datetime
from pathlib import Path

TELEMETRY_BASE = Path("C:/Users/17175/.claude/memory-mcp-data/telemetry")
PIPELINE_ID = "{pipeline_id}"
RUN_ID = f"run-{uuid.uuid4()}"  # Set once at pipeline start


def get_telemetry_path() -> Path:
    """Get today's telemetry directory, creating it if needed."""
    today = datetime.now().strftime("%Y-%m-%d")
    path = TELEMETRY_BASE / "executions" / today
    path.mkdir(parents=True, exist_ok=True)
    return path


def create_telemetry_record(
    model_name: str,
    task_type: str,
    config_vector: dict,
    frame_scores: dict,
    verix_score: float,
    latency_ms: int,
    success: bool,
    error: str | None = None,
) -> dict:
    """Create a telemetry record with WHO/WHEN/PROJECT/WHY tagging and save it."""
    task_id = f"exec-{uuid.uuid4()}"
    now = datetime.now()
    record = {
        "task_id": task_id,
        "timestamp": now.isoformat(),
        "pipeline_id": PIPELINE_ID,
        "run_id": RUN_ID,
        "model_name": model_name,
        "task_type": task_type,
        "config_vector": config_vector,
        "frame_scores": frame_scores,
        "verix_compliance_score": verix_score,
        "latency_ms": latency_ms,
        "task_success": success,
        "error": error,
        "metadata": {
            "WHO": f"{model_name}-via-{PIPELINE_ID}",
            "WHEN": now.isoformat(),
            "PROJECT": PIPELINE_ID,
            "WHY": task_type,
        },
    }
    # Save to disk as an individual execution record
    path = get_telemetry_path() / f"{task_id}.json"
    path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return record
```
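The writer above has no read side in this document. A sketch of loading a day's records back for the meta-loop, assuming the directory layout described earlier (the function name is illustrative):

```python
import json
from datetime import datetime
from pathlib import Path

def load_todays_records(base: Path) -> list[dict]:
    """Load every execution record written today, sorted by timestamp."""
    today = datetime.now().strftime("%Y-%m-%d")
    day_dir = base / "executions" / today
    if not day_dir.exists():
        return []
    records = [json.loads(p.read_text(encoding="utf-8"))
               for p in day_dir.glob("*.json")]
    return sorted(records, key=lambda r: r["timestamp"])
```

Sorting by the stored `timestamp` field rather than file modification time keeps ordering stable if records are ever copied or re-synced.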
Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Examples:

<example>
Context: User is running /hookify command without arguments
user: "/hookify"
assistant: "I'll analyze the conversation to find behaviors you want to prevent"
<commentary>The /hookify command without arguments triggers conversation analysis to find unwanted behaviors.</commentary>
</example>

<example>
Context: User wants to create hooks from recent frustrations
user: "Can you look back at this conversation and help me create hooks for the mistakes you made?"
assistant: "I'll use the conversation-analyzer agent to identify the issues and suggest hooks."
<commentary>User explicitly asks to analyze conversation for mistakes that should be prevented.</commentary>
</example>