Multi-provider research lookup supporting Gemini Deep Research (60-min comprehensive analysis) and Perplexity Sonar (fast web-grounded research). Intelligently routes between providers based on research mode and query complexity. Supports balanced mode for optimal quality/time tradeoff.
Install via:

```bash
npx claudepluginhub flight505/claude-project-planner --plugin claude-project-planner
```

This skill is limited to using the following tools:
This skill provides multi-provider research lookup with intelligent routing between:
The skill automatically selects the best provider and model based on:
| Mode | Provider Selection | Best For | Total Plan Time |
|---|---|---|---|
| balanced | Deep Research for Phase 1 analysis, Perplexity for others | Most projects (recommended) | ~90 min |
| perplexity | Always use Perplexity | Quick planning, well-known tech | ~30 min |
| deep_research | Always use Gemini Deep Research | Novel domains, high-stakes | ~4 hours |
| auto | Automatic based on keywords/context | Let the system decide | Varies |
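The mode-to-provider mapping in the table above can be sketched roughly as follows. This is a simplified illustration, not the skill's actual implementation: the function name and the `auto`-mode keyword list are hypothetical, and the real selection logic also weighs query complexity.

```python
# Illustrative sketch of mode-based provider routing.
# The function and keyword list are assumptions, not the skill's real code.
def select_provider(mode: str, phase: int = 1, query: str = "") -> str:
    if mode == "perplexity":
        return "perplexity"
    if mode == "deep_research":
        return "gemini-deep-research"
    if mode == "balanced":
        # In balanced mode, Deep Research is reserved for Phase 1 analysis
        return "gemini-deep-research" if phase == 1 else "perplexity"
    # auto: decide from keywords/context (keywords here are illustrative)
    deep_keywords = ("competitive landscape", "market analysis")
    if any(kw in query.lower() for kw in deep_keywords):
        return "gemini-deep-research"
    return "perplexity"
```

For example, `select_provider("balanced", phase=2)` routes to Perplexity, matching the "~90 min" balanced profile where only Phase 1 pays the Deep Research cost.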
CRITICAL: You have a strict budget of 2 Deep Research queries per /full-plan session.
Deep Research is expensive (30-60 min per query, high API cost). Use it ONLY for:

1. Phase 1: Competitive Landscape/Analysis (highest priority)
2. Phase 2: Novel Architecture Decisions (use sparingly)
The system automatically tracks usage in `planning_outputs/<project>/DEEP_RESEARCH_BUDGET.json` and falls back to Gemini Pro once the budget is exhausted.
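A minimal sketch of how such a budget file might be read and updated. The actual schema of `DEEP_RESEARCH_BUDGET.json` is not documented here, so the `used` field and function name below are assumptions for illustration only.

```python
import json
from pathlib import Path

BUDGET_LIMIT = 2  # max Deep Research queries per /full-plan session

def consume_deep_research_budget(budget_path: str) -> bool:
    """Record one Deep Research query against the session budget.

    Returns True if the query may proceed; False means the budget is
    exhausted and the caller should fall back to Gemini Pro.
    Note: the 'used' field is an assumed schema, not the documented one.
    """
    path = Path(budget_path)
    state = json.loads(path.read_text()) if path.exists() else {"used": 0}
    if state["used"] >= BUDGET_LIMIT:
        return False
    state["used"] += 1
    path.write_text(json.dumps(state))
    return True
```

With a budget of 2, the first two calls succeed and the third returns False, signaling the Gemini Pro fallback.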
Before using Deep Research, ask: Is this critical to project viability? Does it require 30-60 min multi-source analysis? Can Perplexity provide sufficient depth?
Remember: Perplexity has better temporal accuracy for 2026 data, so prefer it for time-sensitive queries even in Phase 1.
When creating documents with this skill, always consider adding diagrams to enhance visual communication.
Use the project-diagrams skill for system architecture, data flow, integration workflow, and process pipeline diagrams:
```bash
python .claude/skills/project-diagrams/scripts/generate_schematic.py "your diagram description" -o figures/output.png
```
```bash
# Basic usage with auto mode (context-aware selection)
python research_lookup.py "Your research query here"

# Specify research mode explicitly
python research_lookup.py "Competitive landscape for SaaS market" \
  --research-mode deep_research

# Provide context for smart routing
python research_lookup.py "Latest PostgreSQL features" \
  --research-mode balanced \
  --phase 2 \
  --task-type architecture-research

# Force specific Perplexity model
python research_lookup.py "Quick fact check" \
  --research-mode perplexity \
  --force-model pro

# Save output to file / JSON format
python research_lookup.py "your query" -o results.txt
python research_lookup.py "your query" --json -o results.json
```
For Perplexity (required for `perplexity` and `balanced` modes):

```bash
export OPENROUTER_API_KEY='your_openrouter_key'
```

For Gemini Deep Research (required for `deep_research` and `balanced` modes):

```bash
export GEMINI_API_KEY='your_gemini_key'
# Requires pay-as-you-go API ($19.99/month)
```
For long-running Deep Research operations (60+ minutes), comprehensive progress tracking and checkpoint capabilities are available.
```bash
# List all active research operations
python scripts/monitor-research-progress.py <project_folder> --list

# Monitor specific operation with live updates
python scripts/monitor-research-progress.py <project_folder> <task_id> --follow
```
If Deep Research is interrupted (network issues, timeout), resume from checkpoints:
```bash
# List resumable tasks with time estimates
python scripts/resume-research.py <project_folder> 1 --list

# Resume from checkpoint (saves up to 50 minutes)
python scripts/resume-research.py <project_folder> 1 --task <task_name>
```
Checkpoint Strategy: 15% (~9 min saved), 30% (~18 min saved), 50% (~30 min saved).
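The savings figures above are simply the checkpoint fraction applied to a roughly 60-minute run; as a quick sanity check (the function name is illustrative, not part of the skill's API):

```python
def minutes_saved(progress: float, total_minutes: float = 60.0) -> float:
    """Minutes of completed work preserved by resuming from a checkpoint
    taken at `progress` (0.0-1.0) of a run lasting `total_minutes`."""
    return progress * total_minutes

# 15% of a 60-minute run -> ~9 minutes saved, 30% -> ~18, 50% -> ~30
```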
Key Features: Automatic checkpoints at milestones, graceful degradation (Deep Research to Perplexity fallback), error recovery with exponential backoff, external monitoring support.
See Also: docs/WORKFLOWS.md, scripts/enhanced_research_integration.py, scripts/resumable_research.py
| Model | Use Case | Context | Pricing | Speed |
|---|---|---|---|---|
| Sonar Pro (`perplexity/sonar-pro`) | Straightforward lookup | 200K tokens | $3/1M prompt + $15/1M completion + $5/1K searches | Fast (5-15s) |
| Sonar Reasoning Pro (`perplexity/sonar-reasoning-pro`) | Complex analytical queries | 128K tokens | $2/1M prompt + $8/1M completion + $5/1K searches | Slower (15-45s) |
Reasoning Keywords (triggers Sonar Reasoning Pro):

- compare, contrast, analyze, evaluate, critique
- versus, vs, compared to, differences between
- meta-analysis, systematic review, synthesis
- mechanism, why, how does, explain, relationship
- controversy, conflicting, paradox, debate
- pros and cons, advantages and disadvantages, trade-off

Scoring: Reasoning keywords = 3 pts each; multiple questions = 2 pts per "?"; complex clauses = 1.5 pts; long queries (>150 chars) = 1 pt. Threshold: >= 3 pts triggers Reasoning Pro.
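Under these scoring rules, model selection can be sketched as follows. This is a simplified illustration, not the skill's actual classifier: the keyword list mirrors the triggers listed above, but "complex clauses" is approximated here by semicolon counting, which is an assumption since the real clause detector is unspecified.

```python
import re

# Keyword list mirrors the reasoning triggers described above.
REASONING_KEYWORDS = (
    "compare", "contrast", "analyze", "evaluate", "critique",
    "versus", "vs", "compared to", "differences between",
    "meta-analysis", "systematic review", "synthesis",
    "mechanism", "why", "how does", "explain", "relationship",
    "controversy", "conflicting", "paradox", "debate",
    "pros and cons", "advantages and disadvantages", "trade-off",
)

def complexity_score(query: str) -> float:
    q = query.lower()
    # Reasoning keywords: 3 pts each (word-boundary match)
    score = 3.0 * sum(
        1 for kw in REASONING_KEYWORDS
        if re.search(r"\b" + re.escape(kw) + r"\b", q)
    )
    score += 2.0 * q.count("?")   # multiple questions: 2 pts per "?"
    score += 1.5 * q.count(";")   # stand-in for "complex clauses" (assumption)
    if len(query) > 150:          # long queries: 1 pt
        score += 1.0
    return score

def select_model(query: str) -> str:
    # Threshold: >= 3 points routes to Sonar Reasoning Pro
    return "sonar-reasoning-pro" if complexity_score(query) >= 3 else "sonar-pro"
```

For example, "Latest PostgreSQL features" scores 0 and stays on Sonar Pro, while "Compare PostgreSQL versus MySQL" hits two keywords (6 pts) and routes to Reasoning Pro.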
Example Classifications:
Sonar Pro Search (straightforward):
Sonar Reasoning Pro (complex):
```bash
python research_lookup.py "your query" --force-model pro        # Force Sonar Pro
python research_lookup.py "your query" --force-model reasoning  # Force Reasoning Pro
```
[Topic] + [Specific Aspect] + [Time Frame] + [Type of Information]
Good Queries:
Poor Queries: "Tell me about AI" (too broad), "Cancer research" (lacks specificity)
For detailed query examples, capability descriptions, and paper quality standards, see references/query_guide.md.
Known Limitations: Information cutoff, paywall content, very recent unindexed papers, proprietary databases.
Fallback Strategies: Rephrase queries, break complex queries into simpler components, use broader time frames, cross-reference with multiple variations.
For provider-specific technical details, API configuration, performance/cost considerations, and complementary tool guidance, see references/provider_details.md.
This skill serves as a powerful research assistant with intelligent dual-model selection.