From agentdb-search
Search with feature attributions — returns WHY each match scored the way it did. Use it when debugging recall quality, auditing for bias, or explaining results to a user.
```
npx claudepluginhub ruvnet/agentdb --plugin agentdb-search
```

This skill uses the workspace's default tool permissions.
Standard search returns scores; explainable recall returns *features* — which dimensions of the embedding (or which keywords in hybrid search) drove the match.
```
agentdb_explainable_recall(
  query: <embedding | string>,
  k: 5,
  features: 'embedding-dims' | 'bm25-tokens' | 'hybrid-both' | 'metadata'
)

Returns: [
  {
    id, score,
    explanation: {
      topDims?: [{ dim: 12, contribution: 0.18 }, ...],
      topTokens?: [{ token: "jwt", contribution: 0.31 }, ...],
      metadataMatch?: { topic: 'auth', project: 'api' }
    }
  },
  ...
]
```
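The `topDims` attribution is straightforward for a dot-product score (or cosine on pre-normalized vectors): the score is a sum over dimensions, so each dimension's exact contribution is the product of the query and document values at that index. A minimal sketch of the idea — `topDims` and `Contribution` are illustrative names here, not agentdb's internals:

```typescript
interface Contribution {
  dim: number;
  contribution: number;
}

// Per-dimension attribution for a dot-product similarity score.
// Summing every contribution recovers the score exactly, so the
// attribution is lossless rather than an approximation.
function topDims(query: number[], doc: number[], k: number): Contribution[] {
  const contributions: Contribution[] = query.map((q, i) => ({
    dim: i,
    contribution: q * doc[i],
  }));
  // Highest-contributing dimensions first.
  contributions.sort((a, b) => b.contribution - a.contribution);
  return contributions.slice(0, k);
}

const query = [0.9, 0.1, 0.4];
const doc = [0.8, 0.2, 0.5];
console.log(topDims(query, doc, 2));
// dim 0 (~0.72) and dim 2 (~0.2) dominate the score
```

This is why `embedding-dims` explanations are cheap to produce: no extra model pass is needed, only the element-wise products already computed during scoring.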
| Use case | `features` setting |
|---|---|
| Debug an unexpectedly high score | `embedding-dims` — see which dims spiked |
| Verify keyword fallback works | `bm25-tokens` — see if exact terms were the driver |
| Confirm metadata filters fired | `metadata` — see which filter values matched |
| Build a user-facing UI | `hybrid-both` — show both text-level and dim-level signals |