Validate bibliographic references in Markdown documents against authoritative sources like journals, preprints, books, webpages, and reports by confirming authors, titles, years, DOIs, arXiv IDs, and ISBNs via web searches. Generate standardized issue reports with title, description, severity, line numbers, and suggested actions for consistent document reviews.
```shell
npx claudepluginhub agencyenterprise/draft-detective --plugin skills
```

Use this skill whenever you need to report document review issues. It defines the standard issue format (field names, types, severity levels, line-number conventions, and best practices) used across all agent workflows in this project.
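A minimal sketch of what one such issue record might look like. The field names and severity levels below are assumptions inferred from the description above, not the plugin's actual schema:

```python
from dataclasses import dataclass
from typing import Literal

# Assumed severity scale; the plugin's real levels may differ.
Severity = Literal["info", "minor", "major", "critical"]

@dataclass
class ReviewIssue:
    """One standardized document-review issue (illustrative schema only)."""
    title: str
    description: str
    severity: Severity
    lines: tuple[int, int]       # 1-indexed, inclusive start/end lines
    suggested_action: str = ""

issue = ReviewIssue(
    title="Year mismatch in reference [12]",
    description="Citation says 2018; publisher metadata says 2017.",
    severity="minor",
    lines=(42, 42),
    suggested_action="Update the year to 2017.",
)
```

Keeping line numbers as an explicit start/end pair makes single-line and multi-line issues uniform for downstream agents.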
Use this skill to validate a bibliographic reference (journal article, preprint, book, webpage, press release, government report, etc.) by searching for it online and comparing the citation to authoritative sources. Invoke when the user asks to check, verify, fact-check, or validate a citation or list of references — confirming author, title, publisher, year, and identifier (DOI / arXiv ID / ISBN / ISSN) against the actual published work.
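The comparison step can be sketched as follows. This is an illustrative helper under assumed field names (`title`, `year`, `doi`), not the skill's actual implementation:

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Lowercase, strip accents and punctuation so titles compare robustly."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def compare_citation(cited: dict, found: dict) -> list[str]:
    """Return discrepancies between a citation and authoritative metadata.

    Both dicts are assumed to carry 'title', 'year', and 'doi' keys
    (a hypothetical schema for illustration).
    """
    issues = []
    if normalize(cited["title"]) != normalize(found["title"]):
        issues.append(f"title mismatch: {cited['title']!r} vs {found['title']!r}")
    if cited["year"] != found["year"]:
        issues.append(f"year mismatch: {cited['year']} vs {found['year']}")
    if cited.get("doi", "").lower() != found.get("doi", "").lower():
        issues.append("DOI mismatch")
    return issues
```

In practice the `found` side would come from an authoritative lookup (e.g. publisher metadata for a DOI); normalizing before comparing avoids flagging harmless capitalization or punctuation differences as errors.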
AI-powered assistant for academic peer review. Built with LangGraph, this tool validates references against claims, flags unsupported assertions, performs literature reviews, and suggests relevant citations — helping reviewers and researchers assess rigor more efficiently.
Note: This project is under active development and not yet ready for production use. The authors will continue to update this repository with the latest work and evaluation results.
Project funded by RAND: https://rand.org/
Draft Detective aims to streamline the academic peer review process by reducing manual workload and improving the consistency, transparency, and rigor of evaluations.

For detailed development setup instructions, see DEVELOPMENT.md.
Tests are organized by type:
- `tests/unit/` - Fast, isolated unit tests
- `tests/integration/` - Multi-component integration tests
- `evals_inspectai/` - LLM-based evaluations using Inspect AI

```shell
# Run standard tests (default)
uv run pytest

# Run evaluations (see evals_inspectai/ for available eval suites)
uv run inspect eval evals_inspectai/e2e/reference_validation/reference_validation_e2e.py
```
See the LICENSE file.
Multi-agent orchestrator for academic writing: 12 specialist agents and 30 writing principles for review, research, drafting, polishing, bibliography auditing, and literature surveys.
Research integrity plugin for Claude Code — paper auditing, citation verification, experiment analysis, and methodology-first skills for academic workflows.
Verify and validate BibTeX references against CrossRef metadata. Finds uncited entries and flags discrepancies in title/author/journal/volume/pages/year.
Production-grade academic research pipeline for Claude Code: research → write → review → revise → finalize. Ships 4 skills (deep-research, academic-paper, academic-paper-reviewer, academic-pipeline) covering 35+ modes, 32-agent ensemble, Material Passport handoff schema, v3.6.7 cross-model audit gate (synthesis + research-architect + report-compiler pattern protection layer), and v3.6.8 generator-evaluator contract for paper drafting.
Simulate peer review by constructing reviewer personas from Zotero sources. Identifies relevant perspectives, retrieves full texts, builds reviewer profiles, and generates focused reviews on theory/methods and findings.
Semi-automated research assistant for academic research and software development, with skills for literature review, experiments, analysis, writing, and project knowledge management.