Multi-provider research lookup supporting Gemini Deep Research (60-min comprehensive analysis) and Perplexity Sonar (fast web-grounded research). Intelligently routes between providers based on research mode and query complexity. Supports balanced mode for optimal quality/time tradeoff.
Conducts comprehensive web research using intelligent routing between fast and deep analysis providers based on query complexity.
/plugin marketplace add flight505/claude-project-planner
/plugin install claude-project-planner@claude-project-planner

This skill is limited to using the following tools:
This skill provides multi-provider research lookup with intelligent routing between:
The skill automatically selects the best provider and model based on:
CRITICAL: You have a strict budget of 2 Deep Research queries per /full-plan session.
Deep Research is expensive (30-60 min per query, high API cost). Use it ONLY for:
Phase 1: Competitive Landscape/Analysis (Highest Priority)
Phase 2: Novel Architecture Decisions (Use Sparingly)
❌ Version checks or feature comparisons (use Perplexity)
❌ Pricing lookups or cost estimates (use Perplexity)
❌ Quick technical documentation (use Perplexity)
❌ Simple "what is X" queries (use Gemini Flash/Perplexity)
❌ Phases 3-6 research (use Perplexity - better temporal accuracy)
Conservative (Recommended):
Aggressive (High-Stakes Projects):
The system automatically tracks your Deep Research usage:
planning_outputs/<project>/DEEP_RESEARCH_BUDGET.json contains the budget state:

⚠️ 1/2 Deep Research queries used

Before using Deep Research, ask yourself:
Remember: Perplexity has better temporal accuracy for 2026 data, so prefer it for time-sensitive queries even in Phase 1.
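The budget check above can be sketched in a few lines. Only the file name comes from this doc; the JSON field `queries_used` and the helper itself are assumptions for illustration.

```python
import json
from pathlib import Path

# Sketch: read the per-session Deep Research budget state.
# The "queries_used" field name is an assumption, not the documented schema.
def deep_research_remaining(project_dir, limit=2):
    """Return how many Deep Research queries are left for this session."""
    budget_file = Path(project_dir) / "DEEP_RESEARCH_BUDGET.json"
    if not budget_file.exists():
        return limit  # nothing used yet
    state = json.loads(budget_file.read_text())
    return max(limit - state.get("queries_used", 0), 0)
```

A wrapper like this makes it easy to refuse a third Deep Research call before the expensive request is ever sent.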
Use this skill when you need:
When creating documents with this skill, always consider adding diagrams to enhance visual communication.
If your document does not already contain diagrams:
For project planning documents: Diagrams should be generated by default to visually represent system architectures, workflows, data flows, or relationships described in the text.
How to generate schematics:
python .claude/skills/project-diagrams/scripts/generate_schematic.py "your diagram description" -o figures/output.png
The AI will automatically:
When to add diagrams:
For detailed guidance on creating diagrams, refer to the project-diagrams skill documentation.
# Basic usage with auto mode (context-aware selection)
python research_lookup.py "Your research query here"
# Specify research mode explicitly
python research_lookup.py "Competitive landscape for SaaS market" \
--research-mode deep_research
# Provide context for smart routing
python research_lookup.py "Latest PostgreSQL features" \
--research-mode balanced \
--phase 2 \
--task-type architecture-research
# Force specific Perplexity model
python research_lookup.py "Quick fact check" \
--research-mode perplexity \
--force-model pro
| Mode | Provider Selection | Best For |
|---|---|---|
| balanced | Deep Research for Phase 1 analysis, Perplexity for others | Most projects (recommended) |
| perplexity | Always use Perplexity | Quick planning, well-known tech |
| deep_research | Always use Gemini Deep Research | Novel domains, high-stakes |
| auto | Automatic based on keywords/context | Let the system decide |
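The table above can be condensed into a small routing function. This is an illustrative sketch of the documented rules, not the actual implementation in research_lookup.py; the keyword set is an assumption.

```python
# Assumed keyword triggers for deep analysis (illustrative only).
DEEP_KEYWORDS = {"competitive", "landscape", "architecture decision"}

def select_provider(mode, phase=None, task_type="", query=""):
    """Sketch of the mode table: map (mode, context) to a provider."""
    if mode == "perplexity":
        return "perplexity"
    if mode == "deep_research":
        return "gemini-deep-research"
    # balanced/auto: Deep Research only for Phase 1 competitive analysis
    # or queries matching deep-analysis keywords
    if phase == 1 and task_type == "competitive-analysis":
        return "gemini-deep-research"
    if any(k in query.lower() for k in DEEP_KEYWORDS):
        return "gemini-deep-research"
    return "perplexity"
```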
Phase-based routing:
--phase 1 with --task-type competitive-analysis → triggers Deep Research in balanced/auto modes
--phase 2 with keywords like "architecture decision" → may trigger Deep Research

Example in planning workflow:
# Phase 1: Competitive analysis (use Deep Research)
python research_lookup.py "Comprehensive competitive analysis for task management SaaS" \
--research-mode balanced \
--phase 1 \
--task-type competitive-analysis
# Phase 2: Quick tech lookup (use Perplexity)
python research_lookup.py "Latest React best practices 2026" \
--research-mode balanced \
--phase 2 \
--task-type research-lookup
For Perplexity (required for perplexity and balanced modes):
export OPENROUTER_API_KEY='your_openrouter_key'
For Gemini Deep Research (required for deep_research and balanced modes):
export GEMINI_API_KEY='your_gemini_key'
# Requires Google AI Pro subscription ($19.99/month)
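A pre-flight check for these keys avoids a failed run halfway through. The mode-to-key mapping mirrors the requirements above; the helper itself is an illustrative sketch, not part of the plugin.

```python
import os

# Sketch: which environment variables each research mode needs,
# per the setup instructions above.
def missing_keys(mode):
    required = {
        "perplexity": ["OPENROUTER_API_KEY"],
        "deep_research": ["GEMINI_API_KEY"],
        "balanced": ["OPENROUTER_API_KEY", "GEMINI_API_KEY"],
    }.get(mode, [])
    return [k for k in required if not os.environ.get(k)]
```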
For long-running Deep Research operations (60+ minutes), the plugin provides comprehensive progress tracking and checkpoint capabilities.
When research operations take longer than 30 seconds, progress tracking is automatically enabled:
Tier 1: Streaming Progress (Perplexity ~30s)
Tier 2: Progress Files (Deep Research ~60 min)
Monitor long-running research from a separate terminal:
# List all active research operations
python scripts/monitor-research-progress.py <project_folder> --list
# Monitor specific operation with live updates
python scripts/monitor-research-progress.py <project_folder> <task_id> --follow
# Example output:
# [14:23:45] 🔄 [████████████░░░░░] 30% | analyzing: Cross-referencing...
# [14:38:12] 🔄 [████████████████░░] 50% | synthesizing: Results...
# [14:52:30] ✅ [████████████████████] 100% | Complete!
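The progress line format shown above can be reproduced with a small renderer. This is inferred from the example output, not taken from the monitor script itself.

```python
# Sketch: render one progress line in the monitor's apparent format.
def render_progress(pct, stage, width=20):
    filled = int(width * pct / 100)
    bar = "█" * filled + "░" * (width - filled)
    icon = "✅" if pct >= 100 else "🔄"
    return f"{icon} [{bar}] {pct}% | {stage}"
```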
If Deep Research is interrupted (network issues, timeout), resume from checkpoints:
# List resumable tasks with time estimates
python scripts/resume-research.py <project_folder> 1 --list
# Resume from checkpoint (saves up to 50 minutes)
python scripts/resume-research.py <project_folder> 1 --task <task_name>
Checkpoint Strategy:
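The checkpoint flow can be sketched as a save/load pair. The file layout and field names here are assumptions; the real executor is scripts/resumable_research.py.

```python
import json
import time
from pathlib import Path

# Sketch: persist intermediate state so an interrupted run can resume.
def save_checkpoint(folder, task, stage, partial):
    ckpt = Path(folder) / f"{task}.checkpoint.json"
    ckpt.write_text(json.dumps({
        "task": task,
        "stage": stage,        # e.g. "searching", "analyzing", "synthesizing"
        "saved_at": time.time(),
        "partial": partial,    # whatever intermediate results exist so far
    }))
    return ckpt

def load_checkpoint(folder, task):
    ckpt = Path(folder) / f"{task}.checkpoint.json"
    return json.loads(ckpt.read_text()) if ckpt.exists() else None
```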
For Python API usage with full progress tracking:
import asyncio
from pathlib import Path

from enhanced_research_integration import EnhancedResearchLookup

async def main():
    # Initialize with progress tracking
    research = EnhancedResearchLookup(
        project_folder=Path("planning_outputs/20260115_my-project"),
        phase_num=1,
        research_mode="balanced"  # or "quick", "deep_research", "auto"
    )

    # Execute with automatic progress tracking and checkpoints
    result = await research.research_with_progress(
        task_name="competitive-analysis",
        query="Comprehensive competitive landscape analysis",
        estimated_duration_sec=3600  # Auto-detected if not provided
    )

    # Access results and statistics
    print(f"Success: {result['success']}")
    print(f"Provider: {result['provider']}")
    print(f"Sources: {len(result.get('sources', []))}")

    # View execution statistics
    stats = research.get_stats()
    print(f"Tasks completed: {stats['tasks_completed']}")
    print(f"Time saved: {stats['total_time_saved_min']} minutes")

asyncio.run(main())
Key Features:
See Also:
docs/WORKFLOWS.md - Complete workflow examples with dual-terminal monitoring
scripts/enhanced_research_integration.py - Integration layer implementation
scripts/resumable_research.py - Core resumable research executor

Search Academic Literature: Query for recent papers, studies, and reviews in specific domains:
Query Examples:
- "Recent advances in CRISPR gene editing 2024"
- "Latest clinical trials for Alzheimer's disease treatment"
- "Machine learning applications in drug discovery systematic review"
- "Climate change impacts on biodiversity meta-analysis"
Expected Response Format:
Protocol and Method Lookups: Find detailed procedures, specifications, and methodologies:
Query Examples:
- "Western blot protocol for protein detection"
- "RNA sequencing library preparation methods"
- "Statistical power analysis for clinical trials"
- "Machine learning model evaluation metrics"
Expected Response Format:
Research Statistics: Look up current statistics, survey results, and research data:
Query Examples:
- "Prevalence of diabetes in US population 2024"
- "Global renewable energy adoption statistics"
- "COVID-19 vaccination rates by country"
- "AI adoption in healthcare industry survey"
Expected Response Format:
Citation Finding: Locate the most influential, highly-cited papers from reputable authors and prestigious venues:
Query Examples:
- "Foundational papers on transformer architecture" (expect: Vaswani et al. 2017 in NeurIPS, 90,000+ citations)
- "Seminal works in quantum computing" (expect: papers from Nature, Science by leading researchers)
- "Key studies on climate change mitigation" (expect: IPCC-cited papers, Nature Climate Change)
- "Landmark trials in cancer immunotherapy" (expect: NEJM, Lancet trials with 1000+ citations)
Expected Response Format:
Quality Criteria for Citation Selection:
This skill features intelligent model selection based on query complexity:
1. Sonar Pro (perplexity/sonar-pro)
2. Sonar Reasoning Pro (perplexity/sonar-reasoning-pro)
The skill automatically detects query complexity using these indicators:
Reasoning Keywords (triggers Sonar Reasoning Pro):
- compare, contrast, analyze, analysis, evaluate, critique
- versus, vs, vs., compared to, differences between, similarities
- meta-analysis, systematic review, synthesis, integrate
- mechanism, why, how does, how do, explain, relationship, causal relationship, underlying mechanism
- theoretical framework, implications, interpret, reasoning
- controversy, conflicting, paradox, debate, reconcile
- pros and cons, advantages and disadvantages, trade-off, tradeoff, trade offs
- multifaceted, complex interaction, critical analysis

Complexity Scoring:
Practical Result: Even a single strong reasoning keyword (compare, explain, analyze, etc.) will trigger the more powerful Sonar Reasoning Pro model, ensuring you get deep analysis when needed.
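The single-keyword trigger described above can be sketched as a simple membership test. The keywords are taken from this doc (subset shown); the scoring itself is simplified relative to the real complexity detector.

```python
# Subset of the documented reasoning keywords (illustrative).
REASONING_KEYWORDS = {
    "compare", "contrast", "analyze", "evaluate", "critique",
    "versus", "mechanism", "explain", "trade-off",
    "meta-analysis", "systematic review", "pros and cons",
}

def pick_sonar_model(query):
    """One strong reasoning keyword is enough to escalate the model."""
    q = query.lower()
    if any(kw in q for kw in REASONING_KEYWORDS):
        return "perplexity/sonar-reasoning-pro"
    return "perplexity/sonar-pro"
```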
Example Query Classification:
✅ Sonar Pro Search (straightforward lookup):
✅ Sonar Reasoning Pro (complex analysis):
You can force a specific model using the force_model parameter:
# Force Sonar Pro Search for fast lookup
research = ResearchLookup(force_model='pro')
# Force Sonar Reasoning Pro for deep analysis
research = ResearchLookup(force_model='reasoning')
# Automatic selection (default)
research = ResearchLookup()
Command-line usage:
# Force Sonar Pro Search
python research_lookup.py "your query" --force-model pro
# Force Sonar Reasoning Pro
python research_lookup.py "your query" --force-model reasoning
# Automatic (no flag)
python research_lookup.py "your query"
# Save output to a file
python research_lookup.py "your query" -o results.txt
# Output as JSON (useful for programmatic access)
python research_lookup.py "your query" --json
# Combine: JSON output saved to file
python research_lookup.py "your query" --json -o results.json
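The --json output is meant for programmatic access. A minimal consumer might look like the sketch below; the field names (success, provider, sources) follow the Python API example in this doc, but the exact CLI JSON schema may differ.

```python
import json

# Sketch: summarize one --json result (field names are assumptions).
def summarize_result(json_text):
    result = json.loads(json_text)
    n = len(result.get("sources", []))
    return f"{result.get('provider', 'unknown')}: {n} sources"
```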
This skill integrates with OpenRouter (openrouter.ai) to access Perplexity's Sonar models:
Model Specifications:
perplexity/sonar-pro (fast lookup, 200K context)
perplexity/sonar-reasoning-pro (deep analysis with DeepSeek R1, 128K context)
high search context for deeper, more comprehensive research results

API Requirements:
OpenRouter API key (set via the OPENROUTER_API_KEY environment variable)

Python Dependencies (for CLI usage):
If using the research_lookup.py script directly, install dependencies:
pip install requests
# Or install all plugin dependencies:
pip install -r requirements.txt
Academic Mode Configuration:
Source Verification: The skill prioritizes:
Citation Standards: All responses include:
CRITICAL: When searching for papers, ALWAYS prioritize high-quality, influential papers over obscure or low-impact publications. Quality matters more than quantity.
Prioritize papers based on citation count relative to their age:
| Paper Age | Citation Threshold | Classification |
|---|---|---|
| 0-3 years | 20+ citations | Noteworthy |
| 0-3 years | 100+ citations | Highly Influential |
| 3-7 years | 100+ citations | Significant |
| 3-7 years | 500+ citations | Landmark Paper |
| 7+ years | 500+ citations | Seminal Work |
| 7+ years | 1000+ citations | Foundational |
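The threshold table above translates directly into a classifier. The boundary handling at exactly 3 and 7 years is an assumption, since the table's ranges overlap at those points.

```python
# Transcription of the citation-threshold table (boundary choices assumed).
def classify_paper(age_years, citations):
    if age_years < 3:
        if citations >= 100:
            return "Highly Influential"
        if citations >= 20:
            return "Noteworthy"
    elif age_years < 7:
        if citations >= 500:
            return "Landmark Paper"
        if citations >= 100:
            return "Significant"
    else:
        if citations >= 1000:
            return "Foundational"
        if citations >= 500:
            return "Seminal Work"
    return None  # below every threshold for its age bracket
```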
When reporting citations: Always indicate approximate citation count when known (e.g., "cited 500+ times" or "highly cited").
Prioritize papers from higher-tier venues:
Tier 1 - Premier Venues (Always prefer):
Tier 2 - High-Impact Specialized (Strong preference):
Tier 3 - Respected Specialized (Include when relevant):
Tier 4 - Other Peer-Reviewed (Use sparingly):
Prefer papers from established, reputable researchers:
Always prioritize papers that directly address the research question:
When conducting research lookups:
Example Quality-Focused Query Response:
Key findings from high-impact literature:
1. Smith et al. (2023), Nature Medicine (IF: 82.9, cited 450+ times)
- Senior author: Prof. John Smith, Harvard Medical School
- Key finding: [finding]
2. Johnson & Lee (2024), Cell (IF: 64.5, cited 120+ times)
- From the renowned Lee Lab at Stanford
- Key finding: [finding]
3. Chen et al. (2022), NEJM (IF: 158.5, cited 890+ times)
- Landmark clinical trial (N=5,000)
- Key finding: [finding]
For Simple Lookups (Sonar Pro Search):
For Complex Analysis (Sonar Reasoning Pro):
Pro Tip: The automatic selection is optimized for most use cases. Only use force_model if you have specific requirements or know the query needs deeper reasoning than detected.
Good Queries (will trigger appropriate model):
Poor Queries:
Recommended Structure:
[Topic] + [Specific Aspect] + [Time Frame] + [Type of Information]
Examples:
Effective Follow-ups:
This skill enhances project planning by providing:
Known Limitations:
Error Conditions:
Fallback Strategies:
Query: "Recent advances in transformer attention mechanisms 2024"
Model Selected: Sonar Pro Search (straightforward lookup)
Response Includes:
Query: "Compare and contrast the advantages and limitations of transformer-based models versus traditional RNNs for sequence modeling"
Model Selected: Sonar Reasoning Pro (complex analysis required)
Response Includes:
Query: "Standard protocols for flow cytometry analysis"
Model Selected: Sonar Pro Search (protocol lookup)
Response Includes:
Query: "Explain the underlying mechanism of how mRNA vaccines trigger immune responses and why they differ from traditional vaccines"
Model Selected: Sonar Reasoning Pro (requires causal reasoning)
Response Includes:
Query: "Global AI adoption in healthcare statistics 2024"
Model Selected: Sonar Pro Search (data lookup)
Response Includes:
Sonar Pro Search:
Sonar Reasoning Pro:
Automatic Selection Benefits:
Manual Override Use Cases:
Best Practices:
Responsible Use:
Academic Integrity:
In addition to research-lookup, the project planner has access to WebSearch for:
When to use which tool:
| Task | Tool |
|---|---|
| Find academic papers | research-lookup |
| Literature search | research-lookup |
| Deep analysis/comparison | research-lookup (Sonar Reasoning Pro) |
| Look up DOI/metadata | WebSearch |
| Verify publication year | WebSearch |
| Find journal volume/pages | WebSearch |
| Current events/news | WebSearch |
| Non-scholarly sources | WebSearch |
This skill serves as a powerful research assistant with intelligent dual-model selection:
Whether you need quick fact-finding or deep analytical synthesis, this skill automatically adapts to deliver the right level of research support for your project planning needs.