radar
Analyze your Claude Code and Cowork session history to surface actionable workflow insights. Diagnoses failures, identifies automation opportunities, aligns time allocation with goals, and flags repeated knowledge worth saving.
npx claudepluginhub flippyhead/radar --plugin radar

This skill uses the workspace's default tool permissions.
Analyze recent Claude Code and Cowork sessions to generate actionable workflow insights.
Uses the bundled workflow-analyzer for session parsing and enrichment. Claude does the reasoning.
$ARGUMENTS — Optional:
--days N — how many days of history to analyze (default: 7)

Parse the days value from $ARGUMENTS if provided; default to 7.
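A minimal sketch of that argument parsing, assuming $ARGUMENTS arrives as a plain string (the `ARGUMENTS` variable below is a stand-in for illustration):

```shell
# Hypothetical sketch: extract "--days N" from the raw argument string.
ARGUMENTS="--days 14"   # stand-in for the $ARGUMENTS the skill receives
DAYS=7                  # default when --days is absent
set -- $ARGUMENTS       # word-split the argument string into positional params
while [ "$#" -gt 0 ]; do
  case "$1" in
    --days) DAYS="$2"; shift ;;
  esac
  shift
done
echo "$DAYS"
```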
Run the workflow-analyzer CLI to parse and enrich sessions from all configured sources (Claude Code + Cowork):
# Bundled binary (preferred)
node "${CLAUDE_PLUGIN_ROOT}/bin/workflow-analyzer/dist/cli.js" parse --since ${DAYS} --output /tmp/workflow-analyzer-parsed.json
# Fallback if bin/ not available:
# npx @flippyhead/workflow-analyzer@latest parse --since ${DAYS} --output /tmp/workflow-analyzer-parsed.json
If the command fails (bundled binary or npx fallback), surface the error output directly to the user. Do not swallow the error or exit silently.
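The preferred-then-fallback selection above can be sketched as below; the paths and package name come from this document, while the `choose_parser` helper is invented here for illustration:

```shell
# Sketch: prefer the bundled CLI, fall back to npx if bin/ is missing.
choose_parser() {
  bundled="${CLAUDE_PLUGIN_ROOT:-}/bin/workflow-analyzer/dist/cli.js"
  if [ -f "$bundled" ]; then
    echo "node $bundled"
  else
    echo "npx @flippyhead/workflow-analyzer@latest"
  fi
}
CMD="$(choose_parser) parse --since ${DAYS:-7} --output /tmp/workflow-analyzer-parsed.json"
echo "$CMD"
# Run it and surface failures rather than swallowing them, e.g.:
#   $CMD || echo "workflow-analyzer parse failed" >&2
```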
Read the output file. It contains { sessions: [...], sessionGroups: [...] }.
If sessions is empty, report "No activity in the last N days" and stop.
Otherwise, note the summary: how many sessions, which sources (claude-code, cowork), how many session groups.
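Reading and summarizing that file might look like the sketch below. The sample data stands in for real output, and the per-session `source` field is an assumption about the schema beyond the `{ sessions, sessionGroups }` shape stated above:

```shell
# Write sample data in the documented { sessions, sessionGroups } shape.
cat > /tmp/workflow-analyzer-parsed.json <<'EOF'
{"sessions":[{"source":"claude-code"},{"source":"cowork"}],"sessionGroups":[{}]}
EOF
# Summarize: session count, distinct sources, group count.
SUMMARY="$(node -e '
  const d = require("/tmp/workflow-analyzer-parsed.json");
  if (d.sessions.length === 0) {
    console.log("No activity");
  } else {
    const sources = [...new Set(d.sessions.map(s => s.source))];
    console.log(d.sessions.length + " sessions (" + sources.join(", ") + "), "
      + d.sessionGroups.length + " session groups");
  }
')"
echo "$SUMMARY"
```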
Use the get_insights MCP tool to check for existing insights:
- get_insights with status: "new" — unresolved insights
- get_insights with status: "noted" — acknowledged but not acted on
- get_insights with status: "dismissed" — things the user doesn't want to see again

Note which deduplication keys and categories to avoid repeating. If the MCP tool is unavailable, skip this step.
Call get_lists with pinned: true to load the user's current priorities and goals. These are used by the Decision Support analysis below.
If unavailable, skip goal-based analysis.
With all data gathered, analyze the parsed sessions and produce insights across four modules. For each insight, you MUST provide a concrete action — never just describe a problem without telling the user what to do about it.
Module A — Root Cause Diagnosis:
Look at tool failures in the session data. For each tool with a notable failure rate:
- propose a concrete fix (action type: run or action type: install)
- if no fix is available, surface it as an FYI (action type: acknowledge)

Module B — Direct Automation:
Look for patterns that could be automated:
- recurring permission prompts (action type: install with settings.json patch)
- repeated multi-step workflows (action type: install with skill content)

Module C — Decision Support:
Using the session groups and the user's pinned goals from Step 3:
- flag time allocation that is misaligned with stated goals (action type: decide)

Module D — Knowledge Nudges:
Look for repeated topics across sessions:
- suggest saving knowledge worth keeping (action type: save)

For each insight, produce:
{
  "module": "root-cause | direct-automation | decision-support | knowledge-nudges",
  "severity": "alert | action | suggestion | info",
  "title": "One-line summary",
  "observation": "What the data shows (with specific numbers)",
  "diagnosis": "Why this is happening (optional)",
  "action": {
    "type": "install | run | save | review | decide | acknowledge"
  },
  "evidence": [{"metric": "specific numbers"}],
  "effort": "low | medium | high",
  "impact": "low | medium | high",
  "confidence": 0.0-1.0,
  "deduplicationKey": "unique-key-for-this-insight"
}
Action types:
- install: { type: "install", artifact: "filename", content: "file content to install" }
- run: { type: "run", command: "command to run", explanation: "why" }
- save: { type: "save", content: "what to save", destination: "AI Brain / CLAUDE.md / etc" }
- review: { type: "review", summary: "what to look at", links: [] }
- decide: { type: "decide", question: "decision to make", options: ["option 1", "option 2"] }
- acknowledge: { type: "acknowledge", message: "FYI only, no action needed" }

Aim for 5-10 total insights. Prioritize high-impact, low-effort actions. Skip insights that match dismissed deduplication keys from Step 2.
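A filled-in example may help; the values below are entirely hypothetical, and only the field names follow the schema above. The node one-liner sanity-checks the required fields:

```shell
# Hypothetical example insight (all values illustrative, not real data).
cat > /tmp/example-insight.json <<'EOF'
{
  "module": "direct-automation",
  "severity": "suggestion",
  "title": "Recurring manual test-run sequence",
  "observation": "9 of 14 sessions ran the same command sequence back to back",
  "action": { "type": "run", "command": "npm test", "explanation": "hypothetical example" },
  "evidence": [{ "metric": "9/14 sessions" }],
  "effort": "low",
  "impact": "medium",
  "confidence": 0.8,
  "deduplicationKey": "direct-automation:test-run-sequence"
}
EOF
# Check the fields the schema requires are present.
CHECK="$(node -e '
  const i = require("/tmp/example-insight.json");
  for (const k of ["module","severity","title","observation","action","deduplicationKey"])
    if (!(k in i)) throw new Error("missing " + k);
  console.log("ok");
')"
echo "$CHECK"
```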
Write insights to a temp file and use the CLI to publish:
# Bundled binary (preferred)
node "${CLAUDE_PLUGIN_ROOT}/bin/workflow-analyzer/dist/cli.js" publish --insights /tmp/workflow-analyzer-insights.json
# Fallback if bin/ not available:
# npx @flippyhead/workflow-analyzer@latest publish --insights /tmp/workflow-analyzer-insights.json
The insights JSON file should contain:
{
  "insights": [...],
  "metadata": {
    "period": { "since": "ISO date", "until": "ISO date" },
    "sessionCount": N,
    "sources": ["claude-code", "cowork"],
    "modulesRun": ["root-cause", "direct-automation", "decision-support", "knowledge-nudges"]
  }
}
Write this file before running the publish command. If publish fails, fall back to saving insights via create_report or capture_thought MCP tools.
If no brain MCP tools are available, skip publishing entirely — the terminal output from Step 6 is the primary output in terminal-only mode.
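The write-then-publish flow above can be sketched as follows. The metadata values are placeholders; only the file shape follows the documented schema, and the publish command is shown as a comment since it needs the real CLI:

```shell
# Write a skeleton insights file (placeholder metadata, empty insights list).
INSIGHTS=/tmp/workflow-analyzer-insights.json
cat > "$INSIGHTS" <<'EOF'
{
  "insights": [],
  "metadata": {
    "period": { "since": "2025-01-01T00:00:00Z", "until": "2025-01-08T00:00:00Z" },
    "sessionCount": 0,
    "sources": ["claude-code", "cowork"],
    "modulesRun": ["root-cause", "direct-automation", "decision-support", "knowledge-nudges"]
  }
}
EOF
# Validate before publishing so a malformed file fails here, not in the CLI.
STATUS="$(node -e '
  const fs = require("fs");
  JSON.parse(fs.readFileSync("/tmp/workflow-analyzer-insights.json", "utf8"));
  console.log("valid");
')"
echo "$STATUS"
# Then publish:
#   node "${CLAUDE_PLUGIN_ROOT}/bin/workflow-analyzer/dist/cli.js" publish --insights "$INSIGHTS"
```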
Output a brief summary: