Web research via Perplexity AI. Use for technical comparisons (X vs Y), best practices, industry standards, documentation. Triggers on "research", "compare", "vs", "best practice", "which is better", "pros and cons".
From dev-tools. Install: `npx claudepluginhub alexei-led/cc-thingz --plugin dev-tools`
Default: Call Perplexity MCP directly. Only spawn agent when codebase context is explicitly needed.
Use this for 90% of research requests. When user says "ask Perplexity", "research", "look up", etc.:
```
mcp__perplexity-ask__perplexity_ask({
  "messages": [{ "role": "user", "content": "Your research question" }]
})
```
This is fast, reliable, and what users expect.
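The default-vs-agent routing described above (direct MCP call by default, agent only when codebase context is explicitly requested) could be sketched as follows. This is a minimal illustration, not part of the skill itself; the function and constant names are hypothetical, and the trigger phrases are taken from the skill description.

```python
# Hypothetical routing sketch: decide how a research request should be handled.
# Trigger phrases come from the skill description; names are illustrative only.
DIRECT_TRIGGERS = (
    "ask perplexity", "research", "look up", "compare", "vs",
    "best practice", "which is better", "pros and cons",
)

# Phrases that explicitly ask to involve the user's own code.
CODEBASE_TRIGGERS = ("current code", "our codebase", "my code")

def route(request: str) -> str:
    """Return 'agent' only when codebase context is explicitly requested,
    'direct' for anything matching a research trigger, else 'none'."""
    text = request.lower()
    if any(t in text for t in CODEBASE_TRIGGERS):
        return "agent"
    if any(t in text for t in DIRECT_TRIGGERS):
        return "direct"
    return "none"
```

The key design point is the ordering: codebase triggers are checked first, so "compare this with our codebase" routes to the agent even though "compare" is also a direct trigger.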
Only spawn the agent when the user explicitly asks to compare research findings with their current code.
Trigger phrases that warrant agent:
```
Task(subagent_type="perplexity-researcher", prompt="Research: <topic>", run_in_background=true)
```
<!-- CC-ONLY: end -->
DO NOT use agent for:
After Perplexity returns results with citations:
```
# After Perplexity response with citations
WebFetch(url="<cited-url-1>", prompt="Extract key details about <topic>")
WebFetch(url="<cited-url-2>", prompt="Extract implementation examples")
```
Use reference following when:
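The reference-following step above (pick a couple of cited URLs to fetch) could be sketched as below. The `citations` key is an assumption about the shape of the Perplexity response, and the helper name is hypothetical.

```python
def extract_citation_urls(response: dict, limit: int = 2) -> list[str]:
    """Pull the first few citation URLs from a Perplexity response,
    to feed into WebFetch. The 'citations' key is an assumed shape."""
    urls = response.get("citations", [])
    # De-duplicate while preserving order, then cap at `limit`.
    seen, out = set(), []
    for url in urls:
        if url not in seen:
            seen.add(url)
            out.append(url)
        if len(out) == limit:
            break
    return out
```

Capping at two URLs mirrors the two WebFetch calls shown above; fetching every citation would be slower and rarely adds value.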
Report findings using this structure:

```
## Summary
[Key findings - 2-3 sentences]

## Details
[Organized findings by topic]

## Recommendations
[Actionable items for the project]

## Sources
- [Source](url) - [what was learned]
```
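The report structure above could be assembled programmatically; this is a hypothetical sketch, with all names illustrative and the `(title, url, learned)` source shape an assumption.

```python
def render_report(summary: str, details: str,
                  recommendations: list[str],
                  sources: list[tuple[str, str, str]]) -> str:
    """Render findings into the report structure above.
    sources: (title, url, what_was_learned) triples."""
    lines = ["## Summary", summary, "",
             "## Details", details, "",
             "## Recommendations"]
    lines += [f"- {item}" for item in recommendations]
    lines += ["", "## Sources"]
    lines += [f"- [{title}]({url}) - {learned}"
              for title, url, learned in sources]
    return "\n".join(lines)
```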