From radar
Match the scout catalogue against your personal context — goals, usage patterns, current projects, and installed tools — to surface what you should be paying attention to.
npx claudepluginhub flippyhead/radar --plugin radar

This skill uses the workspace's default tool permissions.
$ARGUMENTS — Optional:
- --days N — How far back to look at session history (default: 14)
- --focus <category> — Filter to a specific category (claude-code, mcp, api, agent-sdk, prompting, tooling, workflow, general-ai)

Parse from $ARGUMENTS if provided.
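The flag handling above could be sketched as follows. This is a minimal illustration, not the command's actual implementation; the `parseArgs` helper name is hypothetical, while the flag names, defaults, and category list come from this spec.

```javascript
// Categories accepted by --focus, per the spec above.
const CATEGORIES = new Set([
  "claude-code", "mcp", "api", "agent-sdk",
  "prompting", "tooling", "workflow", "general-ai",
]);

// Hypothetical helper: pull --days and --focus out of a raw $ARGUMENTS string.
function parseArgs(raw) {
  const opts = { days: 14, focus: null }; // defaults from the spec
  const tokens = (raw || "").trim().split(/\s+/).filter(Boolean);
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i] === "--days" && tokens[i + 1]) {
      const n = Number.parseInt(tokens[++i], 10);
      if (Number.isFinite(n) && n > 0) opts.days = n; // ignore invalid values, keep default
    } else if (tokens[i] === "--focus" && tokens[i + 1]) {
      const cat = tokens[++i];
      if (CATEGORIES.has(cat)) opts.focus = cat; // unknown categories are ignored
    }
  }
  return opts;
}
```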
Brain mode: Call get_lists and find all lists with names starting with [Radar] (excluding [Radar] Inbox). For each, call get_list and collect all items with status "open".
Local mode: Read ~/.claude/radar-catalogue.json and collect all items with status "open" across all lists (excluding Inbox).
If the catalogue is empty, tell the user: "No catalogue entries found. Run /radar-scan first to build your discovery catalogue."
If the catalogue file exists but cannot be parsed (corrupt JSON), print: "Catalogue file at [path] is corrupt. Delete it and re-run /radar-scan to rebuild."
If --focus was specified, filter items to only those with matching properties.category.
Pull from multiple sources. Each is optional — work with whatever is available.
Brain goals:
Call get_lists with pinned: true to get the user's stated goals and priorities. Extract goal titles and descriptions.
Brain thoughts:
Call browse_recent with a generous limit (e.g., 50) to get recent thoughts. Note: browse_recent does not support date filtering — it returns the N most recent thoughts regardless of date. Filter results client-side by checking each thought's creation date, keeping only those from the last 14 days. Note recurring topics and themes.
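Since browse_recent cannot filter by date, the client-side cutoff described above might be implemented like this. The thought shape (an ISO-8601 `createdAt` field) is an assumption about the tool's output, not a documented contract.

```javascript
// Keep only thoughts created within the last `days` days.
// Assumes each thought carries an ISO-8601 `createdAt` string (hypothetical shape).
function recentThoughts(thoughts, days = 14, now = Date.now()) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  return thoughts.filter((t) => Date.parse(t.createdAt) >= cutoff);
}
```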
Session history:
Run: node "${CLAUDE_PLUGIN_ROOT}/bin/workflow-analyzer/dist/cli.js" parse --since ${DAYS} --output /tmp/discover-sessions.json
If the bundled binary is not available, fall back to npx @flippyhead/workflow-analyzer@latest parse --since ${DAYS} --output /tmp/discover-sessions.json.
Read the output file. If session history exceeds 50 sessions, summarize the top patterns:
Current environment:
- ~/.claude/settings.json for installed permissions and allowed tools
- .mcp.json files in the home directory and current project for installed MCP servers
- ~/.claude/plugins/ for installed plugins

For each open catalogue item, evaluate against the loaded context. Score on four dimensions:
Goal alignment (0-3):
Usage gap (0-3):
Recency (0-2):
Effort/impact (0-2):
Total score: 0-10. Skip items scoring below 3 — they don't connect to the user's context meaningfully.
Sort by total score descending. Group into tiers:
Act Now (score 7-10): Items with high relevance and low effort. Lead with what the user is doing that this improves. Format:
[Title] (score: N/10) You're [specific observation from session data or goals]. [This tool/feature] [specific benefit]. Next step: [concrete action — install command, link to try, config change]
Worth Exploring (score 5-6): Items with high relevance but higher effort. Format:
[Title] (score: N/10) Given your goal of [goal], this [what it does]. Worth a deeper look when [suggested timing]. Link: [url]
On Your Radar (score 3-4): Items with moderate relevance. Brief format:
[Title] — [one sentence on what it is and why it might matter] ([url])
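The scoring, the skip threshold, and the tier cutoffs above (plus the re-recommendation penalty described later in this spec) can be sketched in one function. The dimension values are assumed to be judged elsewhere; only the arithmetic and thresholds here come from the spec.

```javascript
// Combine the four dimensions (judged elsewhere) into a total score and a tier.
// scores: { goalAlignment: 0-3, usageGap: 0-3, recency: 0-2, effortImpact: 0-2 }
function tierItem(scores, lastRecommended = null, now = Date.now()) {
  let total = scores.goalAlignment + scores.usageGap + scores.recency + scores.effortImpact;
  // Deprioritize items recommended within the last 14 days, per the spec.
  if (lastRecommended && now - Date.parse(lastRecommended) < 14 * 24 * 60 * 60 * 1000) {
    total = Math.max(0, total - 2);
  }
  if (total >= 7) return { total, tier: "act-now" };
  if (total >= 5) return { total, tier: "worth-exploring" };
  if (total >= 3) return { total, tier: "on-your-radar" };
  return { total, tier: "skip" }; // below 3: no meaningful connection to context
}
```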
Limit output to:
If no items score above 3, report: "Nothing in the current catalogue connects strongly to your goals and usage patterns. The catalogue may need more entries — try running /radar-scan or adding items to [Radar] Inbox."
Brain mode: Publish the recommendations as insights using create_report so they appear in the AI Brain insights UI. This is the primary output of discover — not just terminal text.
Call create_report with:
- startDate / endDate: the period covered by session history
- sessionsAnalyzed, totalPrompts, totalToolCalls: from parsed session data (use 0 if session history unavailable)
- projectsActive: from session data (use empty array if unavailable)
- modelUsage: from session data (use empty object if unavailable)
- insights: array of recommendations, each with:
  - category: use "feature-discovery" for Act Now and Worth Exploring items, "ecosystem" for broader tools/techniques
  - observation: what the data shows — cite the specific goal, usage pattern, or environment detail that triggered the match
  - recommendation: the concrete action to take
  - evidence: include the score breakdown (goal alignment, usage gap, recency, effort/impact) and total score
  - links: array of {label, url} for the catalogue item URL and any related resources

Only publish items scoring 5+ (Act Now and Worth Exploring tiers). On Your Radar items are too low-signal for the insights UI — just mention them in the terminal summary.
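Assembling that payload could look like the sketch below. The field names and fallbacks come from this spec; the `buildReport` helper, the `sessionStats` shape, and the per-recommendation `score` property are local conventions invented for illustration.

```javascript
// Build a create_report payload from scored recommendations.
// `score` on each recommendation is a local convention used for filtering;
// it is not itself a field of the published payload.
function buildReport(period, sessionStats, recommendations) {
  return {
    startDate: period.start,
    endDate: period.end,
    sessionsAnalyzed: sessionStats?.sessions ?? 0, // 0 if session history unavailable
    totalPrompts: sessionStats?.prompts ?? 0,
    totalToolCalls: sessionStats?.toolCalls ?? 0,
    projectsActive: sessionStats?.projects ?? [], // empty array if unavailable
    modelUsage: sessionStats?.models ?? {},       // empty object if unavailable
    // Only Act Now / Worth Exploring items (score 5+) reach the insights UI.
    insights: recommendations
      .filter((r) => r.score >= 5)
      .map((r) => ({
        category: r.category,             // "feature-discovery" or "ecosystem"
        observation: r.observation,       // what the data shows
        recommendation: r.recommendation, // concrete action to take
        evidence: r.evidence,             // score breakdown and total
        links: r.links,                   // [{ label, url }]
      })),
  };
}
```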
Terminal-only mode: If brain MCP tools are unavailable, skip publishing. The terminal output from Step 4 is the primary output. Do not warn about brain being unavailable.
For each item that was recommended, update its properties to include:
- lastRecommended: today's ISO date
- matchedGoals: array of goal titles it matched against
- matchedPatterns: array of usage patterns that triggered the match

Brain mode: The update_list_item MCP tool replaces the entire properties field — it does NOT merge. So you must: (1) read the item's existing properties from the get_list response, (2) merge the new keys (lastRecommended, matchedGoals, matchedPatterns) into the existing properties object client-side, (3) send the full merged object to update_list_item.
Local mode: Update the JSON file (same merge-then-write approach).
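The client-side merge step both modes rely on could be sketched as follows. The key names come from this spec; the helper itself is hypothetical.

```javascript
// Merge new recommendation metadata into an item's existing properties.
// Because update_list_item replaces the whole `properties` field rather than
// merging, the caller must send back this full merged object, never just the
// new keys.
function mergeRecommendationProps(existingProps, matchedGoals, matchedPatterns, today = new Date()) {
  return {
    ...(existingProps || {}), // keep everything already on the item
    lastRecommended: today.toISOString().slice(0, 10), // ISO date, e.g. "2024-06-14"
    matchedGoals,
    matchedPatterns,
  };
}
```

The same function serves local mode: read the JSON file, merge, then write the file back.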
Items with lastRecommended within the last 14 days should be deprioritized (reduce score by 2) on subsequent runs to avoid re-surfacing the same recommendations.
Output a brief terminal summary: