Build and maintain a catalogue of AI tools, features, and techniques from external sources. Runs independently of your personal context.
npx claudepluginhub flippyhead/radar --plugin radar

This skill uses the workspace's default tool permissions.
$ARGUMENTS — Optional:
- --sources <all|feeds|manual> — Which sources to scan (default: all)
- --days N — How far back to look in time-based sources (default: 7)

Parse from $ARGUMENTS if provided. Default to --sources all --days 7.
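The argument handling above can be sketched as follows (the `parseRadarArgs` helper name is illustrative, not part of the skill):

```typescript
// Parse the optional --sources and --days flags, falling back to the
// documented defaults (--sources all --days 7). Unknown or invalid
// values are ignored rather than treated as errors.
type RadarArgs = { sources: "all" | "feeds" | "manual"; days: number };

function parseRadarArgs(argv: string[]): RadarArgs {
  const args: RadarArgs = { sources: "all", days: 7 };
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === "--sources") {
      const v = argv[++i];
      if (v === "all" || v === "feeds" || v === "manual") args.sources = v;
    } else if (argv[i] === "--days") {
      const n = Number(argv[++i]);
      if (Number.isInteger(n) && n > 0) args.days = n;
    }
  }
  return args;
}
```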
Check if the ai-brain MCP tools are available (try calling get_lists).
If brain MCP is available:
Call get_lists and look for lists with names starting with [Radar]. Create any that are missing:
- [Radar] Inbox — raw links dropped by the user for enrichment
- [Radar] Claude Code — features, settings, tips
- [Radar] MCP Ecosystem — servers, plugins, integrations
- [Radar] AI Tools & Techniques — broader tools, prompting, workflows

Use create_list for each missing list.
If brain MCP is unavailable:
Use local JSON file at ~/.claude/radar-catalogue.json. Read it if it exists, or initialize with empty structure.
If ~/.claude/radar-catalogue.json does not exist but ~/.claude/scout-catalogue.json does, rename ~/.claude/scout-catalogue.json to ~/.claude/radar-catalogue.json and use it.
{
"lists": {
"[Radar] Inbox": { "items": [] },
"[Radar] Claude Code": { "items": [] },
"[Radar] MCP Ecosystem": { "items": [] },
"[Radar] AI Tools & Techniques": { "items": [] }
},
"lastUpdated": null
}
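The local-mode fallback above — read the file if present, migrate the old scout catalogue if needed, otherwise start empty — might look like this (the `loadLocalCatalogue` helper name is illustrative; paths and structure are taken from the text):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Empty structure from the text above.
const EMPTY_CATALOGUE = {
  lists: {
    "[Radar] Inbox": { items: [] as object[] },
    "[Radar] Claude Code": { items: [] as object[] },
    "[Radar] MCP Ecosystem": { items: [] as object[] },
    "[Radar] AI Tools & Techniques": { items: [] as object[] },
  },
  lastUpdated: null as string | null,
};

function loadLocalCatalogue(dir = path.join(os.homedir(), ".claude")) {
  const radarPath = path.join(dir, "radar-catalogue.json");
  const scoutPath = path.join(dir, "scout-catalogue.json");
  // Migrate the old scout catalogue if radar's file does not exist yet.
  if (!fs.existsSync(radarPath) && fs.existsSync(scoutPath)) {
    fs.renameSync(scoutPath, radarPath);
  }
  if (fs.existsSync(radarPath)) {
    return JSON.parse(fs.readFileSync(radarPath, "utf8"));
  }
  return structuredClone(EMPTY_CATALOGUE);
}
```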
Load all items from [Radar] lists to build a set of known URLs for deduplication.
Brain mode: Call get_list for each [Radar] list. Collect all item URLs into a set.
Local mode: Read from the JSON file.
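In local mode, building the known-URL set can be sketched like this (assuming each item carries a `url` field, as in Step 5; the `knownUrls` helper name is illustrative):

```typescript
// Collect every item URL across all [Radar] lists into one Set so new
// results can be skipped if they are already catalogued. A Set also
// handles the cross-source dedup within a single run for free.
type RadarCatalogue = { lists: Record<string, { items: { url: string }[] }> };

function knownUrls(catalogue: RadarCatalogue): Set<string> {
  const urls = new Set<string>();
  for (const list of Object.values(catalogue.lists)) {
    for (const item of list.items) urls.add(item.url);
  }
  return urls;
}
```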
Run node "${CLAUDE_PLUGIN_ROOT}/bin/workflow-analyzer/dist/cli.js" scan-deps --since ${DAYS} --output /tmp/workflow-analyzer-deps.json. Read the output JSON. If the bundled binary is not available, fall back to npx @flippyhead/workflow-analyzer@latest scan-deps --since ${DAYS} --output /tmp/workflow-analyzer-deps.json.
If the command fails or is not available, log a warning and skip to Step 3 — dependency scanning is additive, not required.
If GITHUB_TOKEN is not set in the environment and the scan-deps output shows rateLimited > 0, print: "GitHub API rate limited — scanned [reposResolved] of [packageCount] dependencies. Set GITHUB_TOKEN for full scanning."
For each entry in the releases array:
- Use release.body (release notes) and repoDescription to assess relevance.
- Set source: "dependency-changelog" and an additional relevanceHint of "direct dependency".
- Use release.url as the item URL for deduplication against the existing catalogue.

Skip this step if --sources manual was specified.
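Mapping one scan-deps release entry to a catalogue item might look like this (the entry shape is inferred from the fields named above and should be treated as an assumption; the category shown is a placeholder refined later in Step 5):

```typescript
// release.url, release.body, and repoDescription come from the
// scan-deps output; the item shape matches the properties field in Step 5.
type DepRelease = { url: string; body: string };

function releaseToItem(release: DepRelease, repoDescription: string) {
  return {
    url: release.url,
    description: repoDescription,
    properties: {
      category: "tooling", // placeholder; chosen from content in Step 5
      relevanceHints: ["direct dependency"],
      source: "dependency-changelog",
      discoveredAt: new Date().toISOString().slice(0, 10),
    },
  };
}
```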
Limit to 10-15 items per source. If a source fails (timeout, rate limit, format change), log a warning and continue to the next source — never fail the entire run.
When a source fails, print a clear one-line message: "Source [name] unavailable: [reason]. Continuing with remaining sources."
Anthropic changelog/blog:
Use WebFetch on https://docs.anthropic.com/en/docs/about-claude/models and https://www.anthropic.com/news to find recent releases and feature announcements. Extract title, URL, and a one-line description for each.
Hacker News:
Use WebFetch on the Algolia API:
- https://hn.algolia.com/api/v1/search_by_date?query=claude+code&tags=story&numericFilters=created_at_i>${DAYS_AGO_TIMESTAMP}
- https://hn.algolia.com/api/v1/search_by_date?query=anthropic+mcp&tags=story&numericFilters=created_at_i>${DAYS_AGO_TIMESTAMP}
- https://hn.algolia.com/api/v1/search_by_date?query=ai+agent+tool&tags=story&numericFilters=created_at_i>${DAYS_AGO_TIMESTAMP}

Extract title, URL (use the url field, falling back to the HN comment URL), and points as a quality signal. Only keep items with 5+ points.
GitHub:
Use WebSearch for:
Extract repo name, URL, and description.
YouTube:
Use WebSearch for:
Extract video title, URL, and channel name.
For each result across all sources: skip if the URL already exists in the catalogue (deduplication).
Also deduplicate across sources within this run — if the same URL was found by both HN and GitHub in this run, only catalogue it once.
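The Hacker News filtering described above can be sketched against the Algolia response shape, where each hit carries title, url, objectID, and points fields (the `hnResults` helper name is illustrative):

```typescript
// Keep only stories with 5+ points; when a story has no external url,
// fall back to its HN comment page built from objectID.
type HnHit = { title: string; url: string | null; objectID: string; points: number };

function hnResults(hits: HnHit[]) {
  return hits
    .filter((h) => h.points >= 5)
    .map((h) => ({
      title: h.title,
      url: h.url ?? `https://news.ycombinator.com/item?id=${h.objectID}`,
      points: h.points,
    }));
}
```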
Skip this step if --sources feeds was specified.
Brain mode: Call get_list for the [Radar] Inbox list. For each item with status "open":
- Use WebFetch to get the page content.
- Write a description.
- Move it to the appropriate [Radar] category list using create_list_item with the enriched fields.
- Close out the inbox item with update_list_item.

Local mode: Process items in the Inbox array and move them to the appropriate category array.
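Local mode's inbox pass can be sketched like this (the target category is shown as a fixed parameter; in practice it comes from the enrichment step, and the `drainInbox` helper name is illustrative):

```typescript
// Move every inbox item into the given category list, leaving the
// inbox empty once processing is done.
type Item = { url: string; description?: string };
type LocalCatalogue = { lists: Record<string, { items: Item[] }> };

function drainInbox(catalogue: LocalCatalogue, categoryList: string) {
  const inbox = catalogue.lists["[Radar] Inbox"];
  for (const item of inbox.items) {
    catalogue.lists[categoryList].items.push(item);
  }
  inbox.items = [];
}
```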
For each new catalogue entry (from Step 3 or Step 4), set the properties field:
{
"category": "<one of: claude-code, mcp, api, agent-sdk, prompting, tooling, workflow, general-ai>",
"relevanceHints": ["<free-text tags describing what workflows/goals this helps with>"],
"source": "<one of: anthropic-changelog, hackernews, github, youtube, manual, dependency-changelog>",
"discoveredAt": "<ISO date>"
}
Choose category based on the content:
- claude-code — Claude Code features, settings, shortcuts, plugins
- mcp — MCP servers, protocols, integrations
- api — Claude API features, SDK updates
- agent-sdk — Agent building tools and frameworks
- prompting — Prompting techniques, system prompts, skill design
- tooling — Developer tools, CLI utilities, browser extensions
- workflow — Workflow patterns, automation techniques, productivity methods
- general-ai — Broader AI developments, models, research

Choose relevanceHints based on what kinds of work this would help with (e.g., "browser automation", "code review", "testing", "deployment", "project management").
Brain mode: Use create_list_item with url, description, and properties set.
Local mode: Append to the appropriate list in the JSON file. Write the file when done.
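Local mode's write-back step can be sketched as (file path from Step 1; the `saveItem` helper name is illustrative):

```typescript
import * as fs from "node:fs";

// Append an item to one [Radar] list in the catalogue file, stamp
// lastUpdated, and persist the whole file in a single write.
function saveItem(filePath: string, listName: string, item: object) {
  const catalogue = JSON.parse(fs.readFileSync(filePath, "utf8"));
  catalogue.lists[listName].items.push(item);
  catalogue.lastUpdated = new Date().toISOString();
  fs.writeFileSync(filePath, JSON.stringify(catalogue, null, 2));
}
```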
Output a brief summary of what was added to each of the [Radar] lists.