Conducts systematic literature reviews: defines scope, searches arXiv/Semantic Scholar/Google Scholar, screens papers, extracts data, synthesizes findings, identifies gaps. For research surveys.
```shell
npx claudepluginhub mathews-tom/armory --plugin armory
```

This skill uses the workspace's default tool permissions.
Systematic discovery, extraction, and synthesis of academic research on a defined topic.
| Need | Skill |
|---|---|
| Survey a research area, synthesize multiple papers | literature-review (this skill) |
| Critique a single paper's methodology and claims | research-critique |
| Audit a manuscript's formatting, structure, citations | manuscript-review |
| Verify a manuscript's numbers trace to code | manuscript-provenance |
| Search arXiv for papers matching a query | arxiv-search (utility) |
Before searching, establish the review boundaries:

- Research question(s) the review must answer
- Inclusion/exclusion criteria (applied during screening in Phase 3)
- Desired output format (narrative review, tabular review, or related-work section)

Present the scope to the user for confirmation before proceeding.
Execute searches across available sources. Use multiple queries with varying specificity to avoid single-query blind spots.
Primary source: arXiv (via the arxiv-search utility):

```shell
uv run --with arxiv python scripts/arxiv_search.py "QUERY" --max-results 30 --sort-by relevance
```
Vary queries systematically:

- Broad keyword: `"retrieval augmented generation"`
- Fielded: `ti:retrieval AND abs:generation AND cat:cs.CL`
- Author-anchored: `au:lewis AND abs:retrieval` (when key authors are known)
- Recency pass: rerun with `--sort-by submitted`

Secondary sources (via web search/fetch):

- Semantic Scholar API: `https://api.semanticscholar.org/graph/v1/paper/search?query=QUERY&limit=20&fields=title,authors,abstract,year,citationCount,externalIds`
- Google Scholar: `site:scholar.google.com QUERY`
- Connected Papers: `https://www.connectedpapers.com/search?q=QUERY`

Snowball strategy: for key included papers, follow their reference lists backward and their citing papers forward, and feed new candidates into screening.
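A minimal sketch of querying the Semantic Scholar endpoint listed above. The URL and field list come from this document; the helper names (`build_url`, `top_cited`, `search`) are illustrative, not part of the skill or the API:

```python
# Sketch: build and issue a Semantic Scholar Graph API search.
# Helper names are hypothetical; endpoint and fields are from the doc above.
import json
import urllib.parse
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper/search"
FIELDS = "title,authors,abstract,year,citationCount,externalIds"

def build_url(query: str, limit: int = 20) -> str:
    """Assemble the search URL with the fields Phase 4 will extract."""
    params = urllib.parse.urlencode(
        {"query": query, "limit": limit, "fields": FIELDS}
    )
    return f"{API}?{params}"

def top_cited(papers: list[dict], n: int = 5) -> list[dict]:
    """Rank a response's `data` list by citation count for screening."""
    return sorted(
        papers, key=lambda p: p.get("citationCount") or 0, reverse=True
    )[:n]

def search(query: str, limit: int = 20) -> list[dict]:
    """Fetch one page of results; returns the response's `data` list."""
    with urllib.request.urlopen(build_url(query, limit)) as resp:
        return json.load(resp).get("data", [])
```

Ranking by citation count is only a screening heuristic; recent papers with few citations still need manual review.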
For each discovered paper, apply the inclusion/exclusion criteria from Phase 1.
Produce a screening table:
| # | ID | Title | Authors | Year | Relevant? | Reason |
|---|---|---|---|---|---|---|
| 1 | 2301.07041 | Paper Title | Author et al. | 2023 | Yes | Directly addresses RQ |
| 2 | 2302.12345 | Other Paper | Author B | 2023 | No | Tangential — focuses on X not Y |
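The screening table above can be emitted mechanically from structured screening decisions. A sketch, assuming per-paper dicts with the fields shown (the field names are hypothetical):

```python
# Sketch: render screening decisions as the markdown table above.
# The input field names (id, title, authors, year, relevant, reason)
# are illustrative assumptions, not a mandated schema.
def screening_table(rows: list[dict]) -> str:
    lines = [
        "| # | ID | Title | Authors | Year | Relevant? | Reason |",
        "|---|---|---|---|---|---|---|",
    ]
    for i, r in enumerate(rows, start=1):
        relevant = "Yes" if r["relevant"] else "No"
        lines.append(
            f"| {i} | {r['id']} | {r['title']} | {r['authors']} "
            f"| {r['year']} | {relevant} | {r['reason']} |"
        )
    return "\n".join(lines)
```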
Rules:

- Record a one-line reason for every decision, for inclusions as well as exclusions
- Screen on title and abstract; defer borderline cases to full-text reading in Phase 4
For each included paper, extract a structured record:
```yaml
- id: "2301.07041"
  title: "Paper Title"
  authors: ["Author One", "Author Two"]
  year: 2023
  venue: "NeurIPS 2023"
  research_question: "How does X affect Y?"
  methodology: "Controlled experiment with N=1000"
  key_findings:
    - "Finding 1 with quantitative result"
    - "Finding 2 with effect size"
  limitations: "Single-domain evaluation"
  relevance_to_rq: "Directly compares methods A and B on metric Y"
  citation_count: 142
```
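If the extraction pipeline is scripted, the record above maps directly onto a typed structure. A Python mirror of that YAML schema, as a sketch (the `is_complete` check is an added assumption, not part of the skill):

```python
# Sketch: a Python mirror of the YAML extraction record above.
# The is_complete() gate is a hypothetical sanity check before synthesis.
from dataclasses import dataclass, field

@dataclass
class PaperRecord:
    id: str
    title: str
    authors: list[str]
    year: int
    venue: str
    research_question: str
    methodology: str
    key_findings: list[str] = field(default_factory=list)
    limitations: str = ""
    relevance_to_rq: str = ""
    citation_count: int = 0

    def is_complete(self) -> bool:
        """A record without title, authors, or findings cannot be synthesized."""
        return bool(self.title and self.authors and self.key_findings)
```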
Extraction discipline:

- When a paper has a `pdf_url`, read the full text for extraction. Do not extract from abstracts alone for included papers.

Transform extracted records into structured analysis. The synthesis method depends on the output format requested in Phase 1.
Thematic synthesis — Group papers by theme, approach, or finding:
Comparative synthesis — Build comparison tables:
| Method | Paper(s) | Dataset | Metric | Result | Limitations |
|---|---|---|---|---|---|
| Method A | [1], [3] | D1 | F1 | 0.85 | Domain-specific |
| Method B | [2], [4] | D1, D2 | F1 | 0.82 | Requires X |
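A comparison table like the one above can be generated from the extracted records. A sketch, assuming one dict per method row (the field names are illustrative):

```python
# Sketch: render method comparison rows as the markdown table above.
# Input field names (method, papers, dataset, metric, result, limitations)
# are hypothetical, chosen to match the table columns.
def comparison_table(rows: list[dict]) -> str:
    out = [
        "| Method | Paper(s) | Dataset | Metric | Result | Limitations |",
        "|---|---|---|---|---|---|",
    ]
    for r in rows:
        papers = ", ".join(f"[{p}]" for p in r["papers"])
        out.append(
            f"| {r['method']} | {papers} | {r['dataset']} "
            f"| {r['metric']} | {r['result']} | {r['limitations']} |"
        )
    return "\n".join(out)
```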
Chronological synthesis — Map the evolution of the field:
Gap analysis — Identify what is missing:
Produce the review document in the format specified in Phase 1.
Standard structure for a narrative review:
Standard structure for a tabular review:
Citation format:
Default to Author et al. (Year) in-text with full references at the end. Adapt to the user's specified format (APA, Chicago, IEEE) if requested.
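The default in-text style can be derived from a record's author list. A sketch, assuming the surname is the last token of each name string (a simplification that breaks on multi-word surnames):

```python
# Sketch: produce the default Author et al. (Year) in-text citation.
# Assumes "surname = last whitespace-separated token", which is a
# simplification; adapt for APA/Chicago/IEEE on request.
def in_text_citation(authors: list[str], year: int) -> str:
    surname = authors[0].split()[-1]
    if len(authors) == 1:
        return f"{surname} ({year})"
    if len(authors) == 2:
        second = authors[1].split()[-1]
        return f"{surname} and {second} ({year})"
    return f"{surname} et al. ({year})"
```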
Before delivering the review, verify:
| Situation | Adaptation |
|---|---|
| Very few papers found (<5) | The field may be nascent. Note this explicitly. Broaden search terms or check if the topic goes by different terminology. Consider adjacent fields. |
| Too many papers found (>100) | Tighten inclusion criteria. Consider limiting to top venues, recent years, or high-citation papers. Produce a scoping review rather than exhaustive review. |
| User provides a paper list instead of a topic | Skip Phase 2 (Search). Start from Phase 3 (Screening) with the provided list. |
| User wants a related-work section for their own paper | Tailor synthesis to position the user's contribution. Organize by approaches the user's work builds on, alternatives it competes with, and gaps it fills. |
| No full-text access to key papers | Extract from abstracts and note the limitation. Do not fabricate methodology details. Flag which papers were abstract-only in the extraction records. |
| Interdisciplinary topic | Search across multiple category prefixes. Note when different fields use different terminology for the same concept. |
| User asks for a "quick" literature review | Reduce Phase 2 to a single search query, Phase 3 to title-only screening, Phase 4 to abstract-only extraction. Label the output as a preliminary survey, not a systematic review. |