From ai-analyst
Pairs success metrics with guardrail metrics to check for trade-offs in analysis. Embedded in ask-question and run-analysis skills; do not invoke separately.
`npx claudepluginhub ai-analyst-lab/ai-analyst-plugin --plugin ai-analyst`

This skill uses the workspace's default tool permissions.
This skill's core functionality has been absorbed into **ask-question and run-analysis**.
Generates dev cycle feedback reports: calculates assertiveness scores, analyzes prompt quality, aggregates metrics, performs root cause analysis on failures, and writes output to docs/feedbacks/cycle-{date}/.
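The report generation above could be sketched roughly as follows. This is a hypothetical illustration, not the skill's actual implementation: the `results` record shape, the "assertiveness score" formula (fraction of passed checks), and the `report.md` filename are all assumptions.

```python
from datetime import date
from pathlib import Path

def write_cycle_report(results, out_root="docs/feedbacks"):
    # Hypothetical metric: assertiveness = fraction of checks that passed.
    total = len(results)
    passed = sum(1 for r in results if r["passed"])
    score = passed / total if total else 0.0
    failures = [r for r in results if not r["passed"]]

    # Output directory follows the docs/feedbacks/cycle-{date}/ convention.
    out_dir = Path(out_root) / f"cycle-{date.today().isoformat()}"
    out_dir.mkdir(parents=True, exist_ok=True)

    lines = [
        f"# Cycle feedback {date.today().isoformat()}",
        f"Assertiveness score: {score:.2%}",
        "## Failures (root causes)",
    ]
    # Each failure line pairs the check name with its suspected root cause.
    lines += [f"- {r['name']}: {r.get('cause', 'unknown')}" for r in failures]
    (out_dir / "report.md").write_text("\n".join(lines))
    return score
```

Under these assumptions, two results with one failure yield a score of 0.5 and a report listing that failure's cause.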
Runs 4-layer validation stack on analysis findings. Embedded functionality in ask-question and run-analysis skills; do not invoke separately.
Runs Karpathy-inspired autonomous iteration loops on any task: modify, verify, keep/discard, repeat. Subcommands for planning, debugging, fixing, security audits, shipping.
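The modify, verify, keep/discard, repeat loop can be sketched as a simple hill-climbing routine. This is a minimal illustration of the loop's shape, not the skill's actual code; the `propose` and `score` callables are assumed interfaces.

```python
import random

def iterate(state, propose, score, steps=20):
    """Karpathy-style loop: propose a modification, verify it by scoring,
    keep it if it improves on the best score, otherwise discard; repeat."""
    best = score(state)
    for _ in range(steps):
        candidate = propose(state)  # modify
        s = score(candidate)        # verify
        if s > best:                # keep only improvements
            state, best = candidate, s
        # else: discard the candidate and try again
    return state, best
```

For example, with `propose` adding a random ±1 step and `score` rewarding closeness to a target value, repeated iterations climb toward the target while discarding moves that regress.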
This skill's core functionality has been absorbed into ask-question and run-analysis.
When ask-question or run-analysis triggers, it automatically handles what guardrails used to do.
Do not invoke this skill directly. Use ask-question and run-analysis instead.