Systematic methodology for gathering, analyzing, and synthesizing research from multiple sources into coherent insights and actionable knowledge.
Install via the plugin hub:

```shell
npx claudepluginhub organvm-iv-taxis/a-i--skills --plugin document-skills
```

This skill uses the workspace's default tool permissions.
This skill provides a systematic methodology for conducting research, synthesizing findings from multiple sources, and producing actionable knowledge artifacts.
┌────────────────────────────────────────────────────────────┐
│                Research Synthesis Workflow                 │
├────────────────────────────────────────────────────────────┤
│                                                            │
│   1. SCOPE         2. GATHER        3. EXTRACT             │
│   ┌─────────┐      ┌─────────┐      ┌─────────┐            │
│   │ Define  │─────▶│ Find    │─────▶│ Capture │            │
│   │ Question│      │ Sources │      │ Insights│            │
│   └─────────┘      └─────────┘      └─────────┘            │
│        │                                 │                 │
│        │          5. PRODUCE       4. SYNTHESIZE           │
│        │          ┌─────────┐      ┌─────────┐             │
│        └─────────▶│ Create  │◀─────│ Connect │             │
│                   │ Artifact│      │ Themes  │             │
│                   └─────────┘      └─────────┘             │
│                                                            │
└────────────────────────────────────────────────────────────┘
Transform vague topics into answerable questions:
| Type | Pattern | Example |
|---|---|---|
| Exploratory | What is X? How does X work? | What is vector search? |
| Comparative | How does X compare to Y? | PostgreSQL vs. Neo4j for graphs? |
| Evaluative | Is X effective for Y? | Is RAG effective for technical docs? |
| Causal | What causes X? What are effects of X? | What causes LLM hallucinations? |
| Prescriptive | How should we implement X? | How to design a RAG pipeline? |
Define explicitly:
## Research Scope: Vector Database Selection
### Research Question
Which vector database best fits our production RAG system
requiring <50ms latency at 10M+ vectors?
### In Scope
- Pinecone, Weaviate, Milvus, Qdrant, pgvector
- Latency benchmarks at scale
- Cost analysis (cloud vs self-hosted)
- Operational complexity
### Out of Scope
- General-purpose databases with vector extensions
- Sub-million vector use cases
- Academic/research-only systems
### Success Criteria
Recommendation with supporting evidence for 2-3 top candidates
Evaluate each source on:
| Criterion | High Quality | Low Quality |
|---|---|---|
| Authority | Expert author, peer-reviewed | Anonymous, no credentials |
| Currency | Recent, updated | Outdated, no dates |
| Accuracy | Citations, verifiable | Unsupported claims |
| Purpose | Inform, educate | Sell, persuade |
| Coverage | Comprehensive | Superficial |
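One way to turn the rubric above into the single 1-5 quality score recorded per source is to rate each criterion and average (a minimal sketch; the criterion names follow the table, the equal-weight averaging rule is an assumption):

```python
# Rate a source 1-5 on each rubric criterion, then average the
# ratings into the single 1-5 quality score kept per source.
CRITERIA = ["authority", "currency", "accuracy", "purpose", "coverage"]

def quality_score(ratings: dict) -> float:
    """Average the five criterion ratings into one 1-5 score."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return round(sum(ratings[c] for c in CRITERIA) / len(CRITERIA), 1)

# Example: a recent peer-reviewed benchmark with narrow coverage
print(quality_score({"authority": 5, "currency": 4, "accuracy": 5,
                     "purpose": 5, "coverage": 3}))  # 4.4
```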
Primary Sources (original)
├── Research papers
├── Official documentation
├── Benchmark data
└── Expert interviews
Secondary Sources (analysis)
├── Review articles
├── Technical blogs
├── Industry reports
└── Book chapters
Tertiary Sources (summaries)
├── Wikipedia
├── Textbooks
└── Encyclopedias
Keyword expansion: broaden queries with synonyms, abbreviations, and adjacent terms ("vector search" → "ANN search", "similarity search", "embedding retrieval").
Citation chaining: from a relevant paper, follow its references backward and the works that cite it forward.
Author tracking: identify authors who publish repeatedly on the topic and review their other work.
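Keyword expansion in particular can be mechanized by crossing synonym sets into candidate queries (a minimal sketch; the synonym lists are illustrative):

```python
from itertools import product

# Cross one synonym set per concept to enumerate candidate queries.
concepts = [
    ["vector database", "vector store", "ANN index"],
    ["latency benchmark", "performance comparison"],
]

queries = [" ".join(terms) for terms in product(*concepts)]
for q in queries:
    print(q)
# 3 x 2 = 6 candidate search queries
```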
For each source, capture:
## Source: [Title]
- **URL/DOI**:
- **Author(s)**:
- **Date**:
- **Type**: [paper/blog/docs/report]
- **Quality Score**: [1-5]
- **Relevance**: [high/medium/low]
- **Key Topics**:
- **Notes**:
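The per-source template above maps naturally onto a structured record, which makes filtering by quality and relevance trivial later (a sketch; field names mirror the template, and the entries are hypothetical placeholders, not real sources):

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str
    authors: list
    date: str
    kind: str            # paper / blog / docs / report
    quality: int         # 1-5 rubric score
    relevance: str       # high / medium / low
    topics: list = field(default_factory=list)
    notes: str = ""

# Hypothetical entries for illustration only.
sources = [
    Source("Benchmark survey", "https://example.com/a", ["A. Author"],
           "2024", "report", 5, "high", ["latency", "recall"]),
    Source("Vendor blog", "https://example.com/b", [], "2021", "blog",
           2, "medium"),
]

# Keep only strong, on-topic sources for the extraction phase.
keep = [s for s in sources if s.quality >= 4 and s.relevance == "high"]
print([s.title for s in keep])  # ['Benchmark survey']
```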
Use consistent templates for extraction:
## Claim: [Specific assertion]
- **Source**: [reference]
- **Evidence**: [supporting data/reasoning]
- **Strength**: [strong/moderate/weak]
- **My Assessment**: [agree/disagree/uncertain]
- **Related Claims**: [links to other notes]
| Type | Description | Weight |
|---|---|---|
| Empirical | Measured data, experiments | High |
| Analytical | Logical derivation | Medium-High |
| Anecdotal | Case studies, examples | Medium |
| Expert Opinion | Authority statements | Medium |
| Theoretical | Model predictions | Medium-Low |
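To compare how well-supported competing claims are, the qualitative weights above can be mapped to numbers and summed per claim (a sketch; the integer mapping is an assumption layered on the table, not part of the rubric):

```python
# Assumed numeric mapping for the qualitative weights above:
# High=5, Medium-High=4, Medium=3, Medium-Low=2.
EVIDENCE_WEIGHT = {
    "empirical": 5,
    "analytical": 4,
    "anecdotal": 3,
    "expert_opinion": 3,
    "theoretical": 2,
}

def claim_support(evidence: list) -> int:
    """Sum the weights of all evidence items behind a claim."""
    return sum(EVIDENCE_WEIGHT[kind] for kind in evidence)

# A claim backed by one benchmark and two case studies
# outweighs one backed by a single expert quote.
print(claim_support(["empirical", "anecdotal", "anecdotal"]))  # 11
print(claim_support(["expert_opinion"]))                       # 3
```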
When sources disagree:
## Conflict: [Topic]
### Position A: [Claim]
- Sources: [list]
- Evidence: [summary]
### Position B: [Claim]
- Sources: [list]
- Evidence: [summary]
### Analysis
- Methodological differences:
- Context differences:
- Possible resolution:
- My conclusion:
Codes                    Themes            Findings
├─ fast queries     ─┐
├─ low latency      ─┼── Performance ──┬── Theme 1: Performance
├─ high throughput  ─┘                 │   varies significantly
├─ managed service  ─┐                 │   by workload type
├─ self-hosted      ─┼── Deployment ───┼── Theme 2: Cloud vs
├─ kubernetes       ─┘                 │   self-hosted tradeoff
├─ pricing tiers    ─┐                 │
├─ compute costs    ─┼── Economics ────┴── Theme 3: Total cost
├─ hidden costs     ─┘                     drives final choice
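The code-to-theme rollup above is essentially a grouping operation (a minimal sketch; the code-to-theme assignments mirror the diagram):

```python
from collections import defaultdict

# Map each low-level code to the theme it was grouped under.
code_to_theme = {
    "fast queries": "Performance",
    "low latency": "Performance",
    "high throughput": "Performance",
    "managed service": "Deployment",
    "self-hosted": "Deployment",
    "kubernetes": "Deployment",
    "pricing tiers": "Economics",
    "compute costs": "Economics",
    "hidden costs": "Economics",
}

# Invert the mapping: theme -> list of supporting codes.
themes = defaultdict(list)
for code, theme in code_to_theme.items():
    themes[theme].append(code)

for theme, codes in themes.items():
    print(f"{theme}: {codes}")
```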
Create decision frameworks from synthesis:
## Vector Database Selection Framework
### Decision Tree
1. Scale requirement?
- <1M vectors → pgvector (simplicity)
- 1M-100M vectors → Continue to 2
- >100M vectors → Milvus/Weaviate (distributed)
2. Operational capacity?
- Limited DevOps → Pinecone (managed)
- Strong DevOps → Continue to 3
3. Cost sensitivity?
- Budget constrained → Qdrant (open source)
- Budget flexible → Evaluate all options
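The decision tree reads directly as a function (a sketch; the thresholds and labels come straight from the tree above):

```python
def pick_vector_db(n_vectors: int, strong_devops: bool,
                   budget_constrained: bool) -> str:
    """Walk the three-step decision tree above."""
    if n_vectors < 1_000_000:
        return "pgvector"            # simplicity wins at small scale
    if n_vectors > 100_000_000:
        return "Milvus/Weaviate"     # distributed systems
    if not strong_devops:
        return "Pinecone"            # managed service
    if budget_constrained:
        return "Qdrant"              # open source
    return "evaluate all options"

print(pick_vector_db(10_000_000, strong_devops=True,
                     budget_constrained=True))  # Qdrant
```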
### Comparison Matrix
| Criterion | Weight | Pinecone | Milvus | Qdrant |
|----------------|--------|----------|--------|--------|
| Latency | 30% | 4 | 5 | 4 |
| Scalability | 25% | 5 | 5 | 4 |
| Operations | 20% | 5 | 3 | 4 |
| Cost | 15% | 2 | 4 | 5 |
| Features | 10% | 4 | 5 | 4 |
| **Weighted**   |        | **4.15** | **4.45**| **4.15**|
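The weighted totals can be recomputed mechanically from the per-criterion scores, which keeps the matrix honest as scores change (a minimal sketch; weights and scores are taken from the table):

```python
weights = {"latency": 0.30, "scalability": 0.25, "operations": 0.20,
           "cost": 0.15, "features": 0.10}

scores = {
    "Pinecone": {"latency": 4, "scalability": 5, "operations": 5,
                 "cost": 2, "features": 4},
    "Milvus":   {"latency": 5, "scalability": 5, "operations": 3,
                 "cost": 4, "features": 5},
    "Qdrant":   {"latency": 4, "scalability": 4, "operations": 4,
                 "cost": 5, "features": 4},
}

# Weighted sum per candidate, rounded for the matrix.
for db, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{db}: {round(total, 2)}")
# Pinecone: 4.15, Milvus: 4.45, Qdrant: 4.15
```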
| Format | Purpose | Audience |
|---|---|---|
| Executive Summary | Quick decision support | Leadership |
| Technical Report | Detailed analysis | Engineers |
| Literature Review | Academic synthesis | Researchers |
| Decision Framework | Structured evaluation | Decision makers |
| Reference Guide | Quick lookup | Practitioners |
Executive Summary (1-2 pages): state the recommendation first, then the key evidence, tradeoffs, and risks, written for readers who will go no further.
Technical Report (5-20 pages): the full methodology, source inventory, conflict analysis, and detailed findings that back the summary.
Before finalizing: verify that every claim traces back to a recorded source, that conflicting evidence is addressed rather than omitted, and that the artifact answers the research question within the defined scope.
Research is rarely linear: new sources often sharpen the original question, and synthesis frequently exposes gaps that send you back to gathering. Expect to cycle through the workflow phases rather than complete them once.
- references/evaluation-rubrics.md - Source quality scoring guides
- references/synthesis-methods.md - Detailed synthesis techniques
- references/artifact-templates.md - Document templates and examples