Guides researchers through 10 ideation frameworks to generate high-impact research directions. Use for new problem spaces, project pivots, or fresh angles on existing work.
Install: `npx claudepluginhub yuuqq/ai-research-skills --plugin ideation`. This skill uses the workspace's default tool permissions.
Structured frameworks for discovering the next research idea. This skill provides ten complementary ideation lenses that help researchers move from vague curiosity to concrete, defensible research proposals. Each framework targets a different cognitive mode—use them individually or combine them for comprehensive exploration.
Do NOT use this skill when you need a survey of existing work rather than new ideas (use scientific-skills:literature-review instead).

Research ideas originate from two distinct modes. Knowing which mode you are in prevents a common failure: building solutions that lack real problems, or chasing problems without feasible approaches.
Problem-First (pain point → method): start from a concrete pain point someone actually experiences, then search for an approach that resolves it.
Solution-First (new capability → application): start from a new capability or technique, then search for a real problem it genuinely serves.
Workflow:
Self-Check:
Every research problem sits at a particular level of abstraction. Deliberately moving up or down the ladder reveals ideas invisible at your current level.
| Direction | Action | Outcome |
|---|---|---|
| Move Up (generalize) | Turn a specific result into a broader principle | Framework papers, theoretical contributions |
| Move Down (instantiate) | Test a general paradigm under concrete constraints | Empirical papers, surprising failure analyses |
| Move Sideways (analogize) | Apply same abstraction level to adjacent domain | Cross-pollination, transfer papers |
Workflow:
Example:
Breakthroughs often come from resolving tensions between widely accepted but seemingly conflicting goals. These contradictions are not bugs—they are the research opportunity.
Common Research Tensions:
| Tension Pair | Research Opportunity |
|---|---|
| Performance ↔ Efficiency | Can we match SOTA with 10x less compute? |
| Privacy ↔ Utility | Can federated/encrypted methods close the accuracy gap? |
| Generality ↔ Specialization | When does fine-tuning beat prompting, and why? |
| Safety ↔ Capability | Can alignment improve rather than tax capability? |
| Interpretability ↔ Performance | Do mechanistic insights enable better architectures? |
| Scale ↔ Accessibility | Can small models replicate emergent behaviors? |
Workflow:
Self-Check:
Borrowing structural ideas from other disciplines is one of the most generative research heuristics. Many foundational techniques emerged this way—attention mechanisms draw from cognitive science, genetic algorithms from biology, adversarial training from game theory.
Requirements for a Valid Analogy:
High-Yield Source Fields for ML Research:
| Source Field | Transferable Concepts |
|---|---|
| Neuroscience | Attention, memory consolidation, hierarchical processing |
| Physics | Energy-based models, phase transitions, renormalization |
| Economics | Mechanism design, auction theory, incentive alignment |
| Ecology | Population dynamics, niche competition, co-evolution |
| Linguistics | Compositionality, pragmatics, grammatical induction |
| Control Theory | Feedback loops, stability, adaptive regulation |
Workflow:
Strong ideas often come from revisiting old problems under new conditions. Advances in hardware, scale, data availability, or regulations can invalidate prior assumptions and make previously impractical approaches viable.
Categories of Change to Monitor:
| Change Type | Example | Research Implication |
|---|---|---|
| Compute | GPUs 10x faster | Methods dismissed as too expensive become feasible |
| Scale | Trillion-token datasets | Statistical arguments that failed at small scale may now hold |
| Regulation | EU AI Act, GDPR | Creates demand for compliant alternatives |
| Tooling | New frameworks, APIs | Reduces implementation barrier for complex methods |
| Failure | High-profile system failures | Exposes gaps in existing approaches |
| Cultural | New user behaviors | Shifts what problems matter most |
Workflow:
Understanding where a method breaks is often as valuable as showing where it works. Boundary probing systematically exposes the conditions under which accepted techniques fail.
Types of Boundaries to Probe:
Workflow:
Self-Check:
Before accepting complexity, ask whether a simpler approach suffices. Fields sometimes over-index on elaborate solutions when a streamlined baseline performs competitively.
Warning Signs of Unnecessary Complexity:
Workflow:
Contribution Framing:
Viewing a system from multiple perspectives reveals distinct classes of research questions. Each stakeholder sees different friction, risk, and opportunity.
Stakeholder Perspectives:
| Stakeholder | Key Questions |
|---|---|
| End User | Is this usable? What errors are unacceptable? What is the latency tolerance? |
| Developer | Is this debuggable? What is the maintenance burden? How does it compose? |
| Theorist | Why does this work? What are the formal guarantees? Where are the gaps? |
| Adversary | How can this be exploited? What are the attack surfaces? |
| Ethicist | Who is harmed? What biases are embedded? Who is excluded? |
| Regulator | Is this auditable? Can decisions be explained? Is there accountability? |
| Operator | What is the cost? How does it scale? What is the failure mode? |
Workflow:
Novelty often emerges from recombination or modularization. Innovation frequently lies not in new primitives, but in how components are arranged or separated.
Composition (combining existing techniques):
Decomposition (breaking apart monolithic systems):
Workflow:
A strong research idea should be defensible in two sentences to a smart non-expert. This test enforces clarity of purpose and sharpens the value proposition.
The Two-Sentence Template:
Sentence 1 (Problem): "[Domain] currently struggles with [specific problem], which matters because [concrete consequence]."
Sentence 2 (Insight): "We [approach] by [key mechanism], which works because [reason]."
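As an illustrative sketch (the function name and example slot values are invented, not part of the skill), the template can be filled mechanically, which makes an empty slot impossible to paper over:

```python
# Hypothetical helper: fill the two-sentence template, failing loudly
# when any slot is left blank so gaps in the idea become visible.
def two_sentence_pitch(domain, problem, consequence, approach, mechanism, reason):
    """Return the two-sentence pitch; an empty slot raises ValueError."""
    slots = {"domain": domain, "problem": problem, "consequence": consequence,
             "approach": approach, "mechanism": mechanism, "reason": reason}
    for name, value in slots.items():
        if not value.strip():
            raise ValueError(f"missing slot: {name}")
    return (f"{domain} currently struggles with {problem}, "
            f"which matters because {consequence}. "
            f"We {approach} by {mechanism}, which works because {reason}.")

print(two_sentence_pitch(
    "Long-context LLMs", "retrieval degradation past 100k tokens",
    "users silently get wrong answers", "reweight attention sinks",
    "a position-aware gating layer", "it restores mid-context recall"))
```

If filling the slots feels forced, that is the signal the test is designed to produce.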
If You Cannot Fill This Template:
Calibration Questions:
Use this end-to-end workflow to go from blank page to ranked research ideas.
Diverge — Goal: produce 10-20 candidate ideas without filtering.
Converge — Goal: narrow to 3-5 strongest ideas.
Apply these filters to each candidate:
| Filter | Question | Kill Criterion |
|---|---|---|
| Explain-It Test (F10) | Can I state this in two sentences? | If no → idea is not yet clear |
| Problem-First Check (F1) | Is the problem genuine and important? | If no one suffers from this → drop it |
| Simplicity Test (F7) | Is the complexity justified? | If a simpler approach works → simplify or drop |
| Stakeholder Check (F8) | Who benefits? Who might object? | If no clear beneficiary → drop it |
| Feasibility | Can I execute this with available resources? | If clearly infeasible → park it for later |
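The triage above can be sketched as a simple checklist pass over candidates. This is a hypothetical illustration (filter names, severity ordering, and the dict-based idea representation are all invented for the sketch), not part of the skill itself:

```python
# Hypothetical triage sketch: each filter is a yes/no check on a candidate
# idea; the most severe failed kill criterion decides the outcome.
FILTERS = [
    ("explain_it", "drop"),               # F10: statable in two sentences?
    ("real_problem", "drop"),             # F1: does anyone actually suffer?
    ("justified_complexity", "simplify"), # F7: is the complexity earned?
    ("clear_beneficiary", "drop"),        # F8: who benefits?
    ("feasible", "park"),                 # park infeasible ideas for later
]

def triage(idea):
    """Return ('keep', []) or (action, failed_filter_names) for an idea.

    `idea` maps filter names to booleans; a missing key counts as a failure.
    """
    failed = [(name, action) for name, action in FILTERS
              if not idea.get(name, False)]
    if not failed:
        return "keep", []
    severity = {"drop": 2, "park": 1, "simplify": 0}
    action = max((a for _, a in failed), key=severity.get)
    return action, [name for name, _ in failed]

print(triage({"explain_it": True, "real_problem": True,
              "justified_complexity": True, "clear_beneficiary": True,
              "feasible": True}))  # ('keep', [])
```

The point of the sketch is the ordering: clarity and problem checks are fatal, while feasibility only parks an idea rather than killing it.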
Develop — Goal: turn the top idea into a concrete research plan.
Completion Checklist:
Not sure which framework to start with? Use this decision guide:
| Your Situation | Start With |
|---|---|
| "I don't know what area to work in" | Tension Hunting (F3) → What Changed (F5) |
| "I have a vague area but no specific idea" | Abstraction Ladder (F2) → Failure Analysis (F6) |
| "I have an idea but I'm not sure it's good" | Explain-It Test (F10) → Simplicity Test (F7) |
| "I have a good idea but need a fresh angle" | Cross-Pollination (F4) → Stakeholder Rotation (F8) |
| "I want to combine existing work into something new" | Composition/Decomposition (F9) |
| "I found a cool technique and want to apply it" | Problem-First Check (F1) → Stakeholder Rotation (F8) |
| "I want to challenge conventional wisdom" | Failure Analysis (F6) → Simplicity Test (F7) |
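The decision guide above is essentially a lookup table. As an illustrative sketch (the situation keys and the fallback sequence are invented for this example), it could be encoded as:

```python
# Hypothetical encoding of the decision guide: each situation maps to an
# ordered list of framework IDs (F-numbers as used in the tables above).
STARTING_POINTS = {
    "no area":        ["F3", "F5"],   # Tension Hunting -> What Changed
    "vague area":     ["F2", "F6"],   # Abstraction Ladder -> Failure Analysis
    "unsure idea":    ["F10", "F7"],  # Explain-It Test -> Simplicity Test
    "fresh angle":    ["F4", "F8"],   # Cross-Pollination -> Stakeholder Rotation
    "combine work":   ["F9"],         # Composition/Decomposition
    "have technique": ["F1", "F8"],   # Problem-First -> Stakeholder Rotation
    "challenge wisdom": ["F6", "F7"], # Failure Analysis -> Simplicity Test
}

def recommend(situation):
    """Return the suggested framework sequence for a situation.

    Unrecognized situations fall back to an invented default of F1-F3.
    """
    return STARTING_POINTS.get(situation, ["F1", "F2", "F3"])

print(recommend("combine work"))  # ['F9']
```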
| Pitfall | Symptom | Fix |
|---|---|---|
| Novelty without impact | "No one has done X" but no one needs X | Apply Problem-First Check (F1) |
| Incremental by default | Idea is +2% on a benchmark | Climb the Abstraction Ladder (F2) |
| Complexity worship | Method has 8 components, each helping marginally | Apply Simplicity Test (F7) |
| Echo chamber | All ideas come from reading the same 10 papers | Use Cross-Pollination (F4) |
| Stale assumptions | "This was tried and didn't work" (5 years ago) | Apply What Changed (F5) |
| Single-perspective bias | Only considering the ML engineer's view | Use Stakeholder Rotation (F8) |
| Premature convergence | Committed to first idea without exploring alternatives | Run full Diverge phase |
When a researcher asks for help brainstorming research ideas:
Key Principles: