Conducts structured multi-source research on tasks with value stance, epistemic tagging (verified/interpretation/speculation/unknown), and actionable synthesis.
```shell
npx claudepluginhub tta-lab/ttal-cli --plugin ttal
```

This skill uses the workspace's default tool permissions.
Conduct structured, multi-source research that produces actionable findings. Research is **synthesis with a stance**, not aggregation. Every finding should connect to a "so what?" that helps the team make a decision.
Conducts systematic internet research via strategic questioning, multi-source analysis, credibility evaluation with tier system, and structured .research/ reports. Use for best practices, comparisons, technology evaluations, trends.
Orchestrates multi-agent research across web, codebase, and community sources for broad, mixed, or ambiguous analyze/investigate requests needing evidence synthesis.
Executes targeted web searches for technologies, libraries, best practices, and competitor solutions. Returns structured findings from 3+ diverse sources like docs, community forums, and blogs.
Announce at start: "I'm using the research skill to investigate this."
Before touching any source, write down one sentence answering:
"My purpose is to calibrate our design intuition — or to copy features from prior art?"
If you can't answer, ask. Purpose-wrong research becomes anxiety-driven link-gathering: it fills the flicknote and drains trust.
Corollaries:
Research analysis is only as good as the foundations under it. Apply these rules in output, not just internally:
Tag every factual claim as verified, interpretation, speculation, or unknown:
Core operating rules:
The test before sending any factual claim:
"If the reader fact-checks this sentence right now, will it hold?"
If not, verify first or flag the uncertainty explicitly.
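Tagged claims in a Findings section might look like this (a sketch only: the bracketed tag format and the project details are invented for illustration; the four tags are the ones this skill names):

```markdown
- [verified] LibA pins its plugin API at v2 since release 3.1 (confirmed in its changelog).
- [interpretation] The maintainers appear to prioritize backward compatibility over new API surface.
- [speculation] A v3 API may land next year; no roadmap entry confirms this.
- [unknown] Whether LibA sandboxes plugin execution; the docs do not say.
```

The value of the tags is that a reader can fact-check the `[verified]` lines and discount the rest appropriately, instead of treating every sentence as equally solid.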
Sources: `ei ask` (repos, URLs, projects), Context7 docs, and local source code.

Before starting research, align on what you're investigating.
Run `ei ask` passes; dispatch `ei ask --async` jobs in parallel when the landscape is wide (15-25 jobs is normal for OSS/competitive surveys). Use `ei ask --repo` and Context7 for library docs, and read source code directly when docs are unclear. Requests reshape mid-session: a "binary verdict: integrate or stay?" can sharpen into "cherry-pick handbook with 2-3 steals per candidate" once data is in.
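A wide survey pass might be fanned out like this (a sketch under assumptions: `ei` is the workspace CLI this skill references, only the `--async` and `--repo` flags it mentions are used, and the repository names and questions are invented placeholders):

```shell
# Dispatch parallel async jobs across the candidate landscape
# (repo names are placeholders, not real projects)
ei ask --async "How does this project structure its plugin system?" --repo github.com/example/alpha
ei ask --async "How does this project structure its plugin system?" --repo github.com/example/beta
ei ask --async "What testing conventions does this project follow?" --repo github.com/example/alpha
```

Asking the same question across every candidate keeps the eventual synthesis comparable; the answers come back in parallel while you continue scoping.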
Every research doc should have:
# Research: [Topic]
## Value Stance
[Calibrate design intuition / survey for copy / decision input / vocabulary handbook]
## Question
[What decision does this inform? What are we trying to learn?]
## Context
[Why this matters, what prompted the investigation]
## Findings
[Multi-source synthesis with claim tags where uncertainty matters]
## Trade-offs
[If comparing approaches: pros/cons of each]
## Recommendation
[Clear recommendation with reasoning. "We should X because Y."]
## Open Questions
[What we still don't know]
## Sources
- [Source name](url) — [license if OSS] — brief note on contribution
Skip sections that don't apply. Question, Findings, and Recommendation are always required. Value Stance is required when the research compares external options.
For surveys comparing external projects (cherry-pick handbook style):
If research is partial (ran out of time/tokens), annotate what you have and leave the task pending.
If research hits a dead end (unanswerable with available tools), annotate why and leave the task pending for review.
- `ei ask --repo` for external repos, `ei ask --project` for internal ones
- `ei ask "question" --url <url>` for specific pages