Install: `npx claudepluginhub mathews-tom/armory --plugin armory`

This skill uses the workspace's default tool permissions.
Systematic RAG pipeline evaluation across the full retrieval-generation chain: designs evaluation query sets, measures retrieval metrics (Precision@K, Recall@K, MRR), evaluates generation quality (groundedness, completeness, hallucination rate), diagnoses component-level failures, and recommends targeted improvements.
Supporting reference files and when to load them:

| File | Contents | Load When |
|---|---|---|
| `references/retrieval-metrics.md` | Precision@K, Recall@K, MRR, NDCG definitions and calculation | Always |
| `references/generation-metrics.md` | Groundedness, completeness, hallucination detection methods | Generation evaluation needed |
| `references/failure-taxonomy.md` | RAG failure categories: retrieval, generation, chunking, embedding | Failure diagnosis needed |
| `references/diagnostic-queries.md` | Designing evaluation query sets, known-answer questions, difficulty levels | Evaluation setup |
Document the RAG pipeline configuration before evaluating, as sketched below:
- **Corpus:** number of documents and their formats
- **Chunking:** strategy, chunk size in tokens, overlap
- **Embedding:** model and vector dimensionality
- **Retrieval:** method (dense, sparse, or hybrid) and K
- **Generation:** model and temperature
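A minimal record of this configuration keeps results reproducible across runs. The field names below are illustrative, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    """Snapshot of the pipeline under evaluation (illustrative field names)."""
    num_documents: int
    document_format: str        # e.g. "markdown", "pdf"
    chunk_strategy: str         # e.g. "fixed", "semantic"
    chunk_size_tokens: int
    chunk_overlap_pct: float
    embedding_model: str
    embedding_dim: int
    retrieval_method: str       # e.g. "dense", "hybrid"
    top_k: int
    generation_model: str
    temperature: float
```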
Create a diverse set of test queries:
| Query Type | Purpose | Count |
|---|---|---|
| Known-answer (factoid) | Measure retrieval + generation accuracy | 10+ |
| Multi-hop | Require combining info from multiple chunks | 5+ |
| Unanswerable | Not in the corpus — should abstain | 3+ |
| Ambiguous | Multiple valid interpretations | 3+ |
| Recent/updated | Test freshness | 2+ |
For each query, document the expected answer and the source chunk(s).
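One way to record the query set, with fields mirroring the table above (a sketch, not a required format; the example queries and answers are invented placeholders):

```python
from dataclasses import dataclass, field

@dataclass
class EvalQuery:
    """One evaluation query with its ground truth (illustrative schema)."""
    query: str
    query_type: str                  # "known-answer", "multi-hop", "unanswerable", ...
    expected_answer: str | None      # None for unanswerable queries
    source_chunk_ids: list[str] = field(default_factory=list)

queries = [
    EvalQuery(
        query="What is the default chunk size?",
        query_type="known-answer",
        expected_answer="512 tokens",
        source_chunk_ids=["doc-3#chunk-7"],
    ),
    EvalQuery(
        query="Who won the 2030 World Cup?",  # not in corpus: model should abstain
        query_type="unanswerable",
        expected_answer=None,
    ),
]
```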
For each test query, run retrieval and measure (a calculation sketch follows the list):
- **Precision@K:** fraction of the top-K retrieved chunks that are relevant
- **Recall@K:** fraction of all relevant chunks that appear in the top K
- **MRR:** reciprocal rank of the first relevant chunk, averaged across queries
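A minimal, self-contained sketch of these calculations over chunk IDs, assuming each query's relevant chunks are known from the query design step:

```python
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved chunk IDs that are relevant."""
    return sum(1 for c in retrieved[:k] if c in relevant) / k

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of all relevant chunk IDs that appear in the top k."""
    return sum(1 for c in retrieved[:k] if c in relevant) / len(relevant)

def mean_reciprocal_rank(runs: list[tuple[list[str], set[str]]]) -> float:
    """Average 1/rank of the first relevant chunk over all queries."""
    total = 0.0
    for retrieved, relevant in runs:
        for rank, chunk_id in enumerate(retrieved, start=1):
            if chunk_id in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(runs)
```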
For each test query with retrieved context, evaluate the generated answer:
- **Groundedness:** every claim is supported by the retrieved chunks
- **Completeness:** the answer covers the full expected answer
- **Hallucination rate:** share of claims not traceable to any retrieved chunk
- **Abstention accuracy:** abstains on unanswerable queries, answers when context is sufficient
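Groundedness is usually scored with an LLM judge (see `references/generation-metrics.md`). As a rough, self-contained stand-in, a lexical support check can flag claims for manual review; the 0.6 threshold and word-level matching are illustrative assumptions:

```python
import re

def support_score(claim: str, chunks: list[str]) -> float:
    """Share of the claim's words found in the best-matching chunk
    (a crude lexical proxy for semantic support)."""
    words = set(re.findall(r"\w+", claim.lower()))
    if not words or not chunks:
        return 0.0
    return max(
        len(words & set(re.findall(r"\w+", chunk.lower()))) / len(words)
        for chunk in chunks
    )

def groundedness(claims: list[str], chunks: list[str], threshold: float = 0.6) -> float:
    """Fraction of claims whose support score clears the threshold;
    claims below it are hallucination candidates."""
    if not claims:
        return 1.0
    return sum(1 for c in claims if support_score(c, chunks) >= threshold) / len(claims)
```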
For every incorrect or low-quality response, classify the root cause:
| Failure Type | Diagnosis | Indicator |
|---|---|---|
| Retrieval failure | Relevant chunks not retrieved | Low Recall@K |
| Ranking failure | Relevant chunk retrieved but ranked low | High Recall@K but low MRR |
| Chunk boundary issue | Answer split across chunk boundaries | Partial matches in multiple chunks |
| Embedding mismatch | Query semantics don't match chunk embeddings | Relevant chunk has low similarity score |
| Generation failure | Correct context but wrong answer | High retrieval scores, low groundedness |
| Hallucination | Model invents facts not in context | Claims not traceable to any chunk |
| Over-abstention | Model refuses to answer when context is sufficient | Unanswered with relevant context present |
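These diagnostic signatures can be codified into a first-pass classifier. The thresholds below are illustrative starting points, not part of the skill:

```python
def classify_failure(recall_k: float, mrr: float, groundedness: float,
                     answered: bool, context_sufficient: bool) -> str:
    """First-pass mapping from a failing query's metric signature to a
    likely failure type (illustrative thresholds; confirm manually)."""
    if not answered and context_sufficient:
        return "over-abstention"
    if recall_k < 0.5:
        return "retrieval failure"   # relevant chunks never surfaced
    if mrr < 0.3:
        return "ranking failure"     # surfaced, but buried in the ranking
    if groundedness < 0.7:
        return "generation failure / hallucination"  # good context, bad answer
    return "inconclusive: inspect chunks for boundary or embedding issues"
```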
Based on failure analysis, recommend specific improvements:
| Failure Pattern | Recommendation |
|---|---|
| Chunk boundary issues | Increase overlap, try semantic chunking |
| Low Precision@K | Reduce K, add reranking stage |
| Low Recall@K | Increase K, try hybrid search |
| Embedding mismatch | Try different embedding model, add query expansion |
| Hallucination | Strengthen grounding instruction in prompt, reduce temperature |
| Over-abstention | Soften abstention criteria in prompt |
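The table maps directly onto a lookup that can be paired with the classifier sketched earlier; the fixes are the table's own, the structure is illustrative:

```python
# Illustrative first remediation to try per diagnosed failure pattern.
RECOMMENDATIONS = {
    "chunk boundary issue": "increase overlap or try semantic chunking",
    "low precision@k":      "reduce K or add a reranking stage",
    "low recall@k":         "increase K or try hybrid search",
    "embedding mismatch":   "try a different embedding model or add query expansion",
    "hallucination":        "strengthen grounding instructions; reduce temperature",
    "over-abstention":      "soften abstention criteria in the prompt",
}
```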
## RAG Audit Report
### Pipeline Configuration
| Component | Value |
|-----------|-------|
| Documents | {N} ({format}) |
| Chunking | {strategy}, {size} tokens, {overlap}% overlap |
| Embedding | {model} ({dimensions}d) |
| Retrieval | {method}, K={N} |
| Generation | {model}, temperature={T} |
### Evaluation Dataset
- **Total queries:** {N}
- **Known-answer:** {N}
- **Multi-hop:** {N}
- **Unanswerable:** {N}
- **Ambiguous:** {N}
- **Recent/updated:** {N}
### Retrieval Quality
| Metric | Score | Target | Status |
|--------|-------|--------|--------|
| Precision@{K} | {score} | {target} | {Pass/Fail} |
| Recall@{K} | {score} | {target} | {Pass/Fail} |
| MRR | {score} | {target} | {Pass/Fail} |
### Generation Quality
| Metric | Score | Target | Status |
|--------|-------|--------|--------|
| Groundedness | {score} | {target} | {Pass/Fail} |
| Completeness | {score} | {target} | {Pass/Fail} |
| Hallucination rate | {score} | {target} | {Pass/Fail} |
| Abstention accuracy | {score} | {target} | {Pass/Fail} |
### Failure Analysis
| # | Query | Failure Type | Root Cause | Recommendation |
|---|-------|-------------|------------|----------------|
| 1 | {query} | {type} | {cause} | {fix} |
### Recommendations (Priority Order)
1. **{Recommendation}** — addresses {N} failures, expected impact: {description}
2. **{Recommendation}** — addresses {N} failures, expected impact: {description}
### Sample Failures
#### Query: "{query}"
- **Expected:** {answer}
- **Retrieved chunks:** {chunk summaries with relevance scores}
- **Generated:** {response}
- **Issue:** {diagnosis}
## Troubleshooting

| Problem | Resolution |
|---|---|
| No known-answer queries available | Help design them from the document corpus. Pick 10 facts and formulate questions. |
| Pipeline access not available | Work from recorded inputs/outputs. Post-hoc evaluation is possible with query-context-response triples. |
| Corpus is too large to review | Evaluate on a sample: select representative documents and generate queries from them. |
| Multiple failure types co-exist | Address retrieval failures first. Generation quality cannot exceed retrieval quality. |
Push back if: