Logs data analyst errors (wrong SQL, metric definitions, schema references, or flawed logic) together with their fixes, severity, category, and affected datasets so future analyses can learn from them. Triggered by the phrase 'log a correction' or the /log-correction command.
```shell
npx claudepluginhub ai-analyst-lab/ai-analyst-plugin --plugin ai-analyst
```

This skill uses the workspace's default tool permissions.
Record analyst mistakes and their fixes so future analyses learn from past errors. Manual counterpart to automatic feedback capture.
Extract from conversation context or ask the user:
- critical: wrong numbers shared
- high: changes conclusions
- medium: directionally correct
- low: no impact

If any required field is unclear, ask the user. Do not guess severity.
Assign one category based on the error type:
| Category | Description |
|---|---|
| sql | Wrong query — bad join, missing filter, incorrect aggregation |
| metric | Wrong metric definition — numerator/denominator error, wrong time window |
| schema | Wrong column or table reference — stale schema, misnamed field |
| logic | Flawed reasoning — Simpson's paradox missed, survivorship bias, wrong comparison |
| other | Anything that does not fit the above |
Read `<workspace>/knowledge/corrections/index.yaml` using `safe_read_yaml()`. If `last_correction_id` is null, use CORR-001; otherwise parse the numeric suffix, increment, and zero-pad to 3 digits.

Fill in the template from `<workspace>/knowledge/corrections/log.template.yaml`:

```yaml
- id: "CORR-{N}"
  date: "{YYYY-MM-DD}"
  severity: "{severity}"
  category: "{category}"
  dataset: "{dataset_name}"
  tables: ["{table1}", "{table2}"]
  description: "{what was wrong}"
  fix: "{what the correct approach is}"
  sql_before: "{original query, if applicable, else null}"
  sql_after: "{corrected query, if applicable, else null}"
  prevented_by: "{which validation layer should have caught this}"
```
Read `<workspace>/knowledge/corrections/log.yaml` using `safe_read_yaml()`, append the new entry to the `corrections` list, and write it back with `atomic_write_yaml()`.

Update `<workspace>/knowledge/corrections/index.yaml` (already loaded in Step 3):

- increment `total_corrections`
- increment the `by_severity.{severity}` counter
- increment `by_category.{category}` (create the key if it does not exist)
- set `last_correction_id` to the new ID
- set `last_updated` to today's date

Write the index back with `atomic_write_yaml()`. Report to the user:
Correction logged: {id}
Severity: {severity} | Category: {category}
Description: {description}
Fix: {fix}
Future analyses will check for this pattern during validation.
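The index update in the step above is a handful of counter bumps. A minimal sketch, assuming the index is a plain dict as loaded by the skill's `safe_read_yaml()` helper (`update_index` itself is a hypothetical name):

```python
from datetime import date

def update_index(index: dict, corr_id: str, severity: str, category: str) -> dict:
    """Bump index counters after a correction is appended to log.yaml."""
    index["total_corrections"] = index.get("total_corrections", 0) + 1
    # Create the severity/category keys if they do not exist yet.
    index.setdefault("by_severity", {})
    index["by_severity"][severity] = index["by_severity"].get(severity, 0) + 1
    index.setdefault("by_category", {})
    index["by_category"][category] = index["by_category"].get(category, 0) + 1
    index["last_correction_id"] = corr_id
    index["last_updated"] = date.today().isoformat()
    return index
```

Using `.get(..., 0)` and `setdefault` means the same function works on a brand-new empty index and on one with existing counters.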
- If `log.yaml` or `index.yaml` is missing or corrupt, create it from scratch with `schema_version: 1`.
- `sql_before`/`sql_after` should be trimmed to the relevant clause, not the entire multi-hundred-line query.
- `prevented_by` should reference a specific validation layer: structural, logical, business-rules, Simpson's check, or source tie-out.
- If the error involved no SQL, set `sql_before` and `sql_after` to null.
- If the dataset cannot be determined, set `dataset` to "unknown" and note that in the description.