Self-evolving deep research. Gets smarter every time you use it. Searches across channels, synthesizes cited reports, and learns which queries and platforms work best.

npx claudepluginhub 0xmariowu/autosearch
Research that gets smarter every time you use it.
34 search channels. Self-evolving queries. Cited reports. Zero API keys.
Website • Documentation • Quick Start • What Makes It Different • Self-Evolution • How It Works • Channels • Contributing
npm install -g @0xmariowu/autosearch
Then start a new Claude Code session and run:
/autosearch "compare vector databases for RAG applications"
Or install via the one-line script:

curl -fsSL https://raw.githubusercontent.com/0xmariowu/autosearch/main/scripts/install.sh | bash
AutoSearch asks two questions — how deep and what format — then searches, evaluates, and delivers a cited report with real-time progress:
[Phase 1/6] Recall — 25 rubrics, 47 items recalled, 15 queries planned
[Phase 2/6] Search — 62 results from 12 channels
[Phase 3/6] Evaluate — 54 relevant, 3 gap queries
[Phase 4/6] Synthesize — report ready (38 citations)
[Phase 5/6] Rubrics — 23/25 rubrics passed
[Phase 6/6] Evolve — 4 patterns saved
| | AutoSearch | Perplexity | Native Claude |
|---|---|---|---|
| Search channels | 34 dedicated connectors | ~3 web engines | 1 (WebSearch) |
| Chinese sources | 12 native (zhihu, bilibili, 36kr, csdn...) | 0 | 0 |
| Academic sources | 6 (arXiv, Semantic Scholar, OpenReview, Papers with Code...) | 1 | 0 |
| Gets smarter over time | Yes — learns which queries and channels work | No | No |
| Every result cited | Yes (two-stage citation lock) | Yes (URL-level) | No |
| Reports | Markdown / Rich HTML / Slides | Web page | Plain text |
| Cost | Free (Claude Code plugin) | $20/month | Free |
| Integration | Native inside Claude Code | Separate tool | Built-in but limited |
This is the core idea. Most search tools run the same strategy every time. AutoSearch learns from every session and gets measurably better.
How it works: after each search, AutoSearch records which queries found relevant results and which returned nothing. Next time, it skips what failed and doubles down on what worked. Over sessions, it builds a profile of which channels are useful for which types of topics.
What it looks like in practice:
Session 1: "vector databases for RAG"
→ Searched 15 channels, 8 had results
→ Learned: arxiv + github-repos are high-yield for this topic
→ Learned: producthunt and crunchbase returned nothing useful
→ Saved 3 winning query patterns
Session 2: same topic, 2 weeks later
→ Auto-skipped channels that failed last time
→ Reused winning query patterns, added freshness filter
→ Found 12 new results the first session missed
→ Score improved: 0.65 → 0.78
Session 3: different topic ("AI agent frameworks")
→ Applied cross-topic patterns (arxiv query structure, github star filter)
→ Reached 0.71 on first attempt (vs 0.58 baseline)
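The channel memory behind those sessions can be sketched in a few lines. This is illustrative only: the names (`record_session`, `plan_channels`, the profile store path) are hypothetical, not AutoSearch's actual internals.

```python
import json
from pathlib import Path

# Hypothetical on-disk location for the learned profile.
STORE = Path("~/.autosearch/profile.json").expanduser()

def load_profile() -> dict:
    """Per-channel hit/try counts, keyed by topic."""
    if STORE.exists():
        return json.loads(STORE.read_text())
    return {}

def record_session(profile: dict, topic: str, results_per_channel: dict) -> None:
    """After a search, remember which channels yielded relevant results."""
    stats = profile.setdefault(topic, {})
    for channel, hits in results_per_channel.items():
        s = stats.setdefault(channel, {"hits": 0, "tries": 0})
        s["tries"] += 1
        s["hits"] += hits

def plan_channels(profile: dict, topic: str, all_channels: list, min_yield: float = 0.2) -> list:
    """Skip channels that repeatedly returned nothing for this topic."""
    stats = profile.get(topic, {})
    planned = []
    for ch in all_channels:
        s = stats.get(ch)
        if s and s["tries"] >= 2 and s["hits"] / s["tries"] < min_yield:
            continue  # channel has a track record of failing here
        planned.append(ch)
    return planned
```

After two sessions where producthunt returns nothing for a topic, `plan_channels` drops it from the next plan while keeping untried channels eligible.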
The safety mechanism: the evaluator (judge.py) is fixed and cannot be modified by evolution. Only search strategy evolves — not the scoring. This prevents the system from gaming its own metrics.
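One way to enforce that separation (a sketch, not AutoSearch's actual judge.py) is to keep the scorer as a pure function that the evolution step may call but never rewrite: evolution receives the score as data and can only mutate the strategy.

```python
def judge(report: dict, rubrics: list) -> float:
    """Fixed scorer: fraction of rubrics satisfied. Never touched by evolution."""
    passed = sum(1 for rubric in rubrics if rubric(report))
    return passed / len(rubrics)

def evolve(strategy: dict, score: float, threshold: float = 0.7) -> dict:
    """Evolution adjusts only the search strategy; the scoring logic is out of reach."""
    if score < threshold:
        # Hypothetical knob: widen gap-query coverage when quality falls short.
        strategy = {**strategy, "extra_gap_queries": strategy.get("extra_gap_queries", 0) + 1}
    return strategy
```

Because `judge` closes over nothing mutable and `evolve` only returns a new strategy dict, no sequence of evolution steps can change how reports are scored.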
You: /autosearch "topic"
│
▼
[1] Claude recalls what it already knows → maps 9 knowledge dimensions
│
[2] Identifies gaps → generates queries ONLY for what Claude doesn't know
│
[3] Searches 34 channels in parallel (10-30 seconds)
│
[4] LLM evaluates each result for relevance, filters noise
│
[5] Synthesizes report with two-stage citation lock
│
[6] Checks quality rubrics → evolves strategy → commits improvements
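The six phases above can be sketched as one orchestration loop. Each phase is passed in as a function here purely for illustration; the real plugin's module boundaries may differ.

```python
def run_autosearch(topic, recall, search, evaluate, synthesize, judge, evolve):
    """Sketch of the six-phase loop; each argument is a pluggable phase function."""
    known, gaps = recall(topic)                      # [1]-[2] recall known facts, find gaps
    queries = list(gaps)                             # query only what is missing
    results = search(queries)                        # [3] parallel channel search
    relevant = [r for r in results if evaluate(r)]   # [4] LLM relevance filter
    report = synthesize(topic, relevant)             # [5] cited report
    score = judge(report)                            # [6] fixed rubric check
    evolve(score)                                    # strategy update for next session
    return report, score
```

The key property is that `judge` sits outside the loop's mutable state: `evolve` sees only the score, mirroring the safety mechanism described earlier.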