Orchestrates evidence-first research workflows for fresh facts, comparisons, enrichments, and recommendations from public sources and user context using ECC skills.
From everything-claude-code. Install: `npx claudepluginhub affaan-m/everything-claude-code --plugin everything-claude-code`. This skill uses the workspace's default tool permissions.
Use this when the user asks to research something current, compare options, enrich people or companies, or turn repeated lookups into a monitored workflow.
This is the operator wrapper around the repo's research stack. It is not a replacement for deep-research, exa-search, or market-research; it tells you when and how to use them together.
Pull these ECC-native skills into the workflow when relevant:
- exa-search for fast current-web discovery
- deep-research for multi-source synthesis with citations
- market-research when the end result should be a recommendation or ranked decision
- lead-intelligence when the task is people/company targeting instead of generic research
- knowledge-ops when the result should be stored in durable context afterward

Normalize any supplied material into sourced facts and user-provided context.
Do not restart the analysis from zero if the user already built part of the model.
Choose the right lane before searching:
- exa-search for fast discovery
- deep-research when synthesis or multiple sources matter
- market-research when the outcome should end in a recommendation
- lead-intelligence when the real ask is target ranking or warm-path discovery

For important claims, say whether they are sourced facts, user-provided context, or inferences from the evidence.
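The lane choice above can be sketched as a simple priority check. This is a minimal illustration, not part of the skill itself; the function name and boolean flags are hypothetical, and only the returned skill names come from the list above:

```python
def choose_lane(needs_synthesis: bool,
                wants_recommendation: bool,
                targets_people_or_companies: bool) -> str:
    """Pick one ECC research skill for a query (hypothetical helper).

    Checks run from the most specific ask down to the default:
    targeting beats recommendation, which beats synthesis, which
    beats plain discovery.
    """
    if targets_people_or_companies:
        return "lead-intelligence"  # target ranking / warm-path discovery
    if wants_recommendation:
        return "market-research"    # outcome should end in a recommendation
    if needs_synthesis:
        return "deep-research"      # multiple sources matter
    return "exa-search"             # fast discovery is enough
```

For example, a query that needs multiple sources but no recommendation routes to `deep-research`, while a plain fact lookup falls through to `exa-search`.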
Freshness-sensitive answers should include concrete dates.
If the user is likely to ask the same research question repeatedly, say so explicitly and recommend a monitoring or workflow layer instead of repeating the same manual search.
QUESTION TYPE
- factual / comparison / enrichment / monitoring
EVIDENCE
- sourced facts
- user-provided context
INFERENCE
- what follows from the evidence
RECOMMENDATION
- answer or next move
- whether this should become a monitor
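The template above can be assembled mechanically. This is a minimal sketch assuming a plain-text report; the function and parameter names are hypothetical, and only the section headers come from the template:

```python
def build_report(question_type: str, evidence: list[str],
                 inference: str, recommendation: str,
                 should_monitor: bool) -> str:
    """Render the four-section research report as plain text."""
    lines = [
        "QUESTION TYPE",
        f"- {question_type}",
        "EVIDENCE",
        *[f"- {item}" for item in evidence],
        "INFERENCE",
        f"- {inference}",
        "RECOMMENDATION",
        f"- {recommendation}",
        f"- whether this should become a monitor: {'yes' if should_monitor else 'no'}",
    ]
    return "\n".join(lines)
```

A comparison question with two evidence items, for instance, yields a report whose first line is `QUESTION TYPE` and whose final line records the monitoring call.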