Catch AI mistakes before they cost weeks of compute. Reproduce papers from arXiv. Debug runs evidence-first. Compare experiments at the right epoch. Launch with discipline.
Research integrity plugin for Claude Code — paper auditing, citation verification, experiment analysis, and methodology-first skills for academic workflows.
Built by Fatih Cagatay Akyon (1300+ citations, 7 patents) after 200+ Claude Code sessions, dozens of critical AI mistakes caught the hard way, and thousands of hours of PhD research. Every guardrail in this plugin traces to a real mistake.
I use Claude Code daily for my PhD. It's powerful, but it makes research-specific mistakes that cost hours.
Other plugins give you more commands. This plugin gives you guardrails.
```shell
claude plugin marketplace add fcakyon/phd-skills
claude plugin install phd-skills@phd-skills
```
Then run /phd-skills:setup inside Claude Code to configure notifications, LaTeX, and allowlist.
Open Claude Code in your paper directory, then:
- /phd-skills:xray — audit paper against code and data across 5 dimensions, get prioritized fixes
- /phd-skills:factcheck — verify all BibTeX entries and cited claims against DBLP
- /phd-skills:fortify CVPR — anticipate reviewer questions, rank ablations, and suggest paper improvements
- /phd-skills:gaps neural architecture search — find what's missing in the literature
- /loop 30m check experiment logs, notify me if metrics beat the baseline or if loss starts to diverge
- "check if my numbers match the code" — skills auto-trigger, no slash command needed
- "make code publish ready" — prepares code for open-source release with license, docs, and reproducibility checks

After running /phd-skills:setup, all Claude Code notifications (task completion, background agents) are forwarded to your configured service (ntfy/Slack/email).
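For ntfy, forwarding a notification amounts to a plain HTTP POST to your topic URL. A minimal sketch, assuming a self-chosen topic name; the actual wiring is generated by /phd-skills:setup, and this script only prints the message instead of sending it, to stay offline:

```shell
# Hypothetical notification forward to ntfy; topic name is an assumption.
TOPIC="phd-skills-demo"
TITLE="Claude Code"
BODY="Background agent finished: experiment logs checked"
# A real hook would send it with:
#   curl -H "Title: $TITLE" -d "$BODY" "https://ntfy.sh/$TOPIC"
# Here we only print what would be sent.
printf '%s: %s\n' "$TITLE" "$BODY"
```

Any service that accepts an HTTP POST (Slack webhooks, an email gateway) can be swapped in at the same point.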
| Command | What it does |
|---|---|
| /phd-skills:xray | Audit paper against code and data (5 parallel dimensions) |
| /phd-skills:factcheck | Verify BibTeX entries and cited claims against DBLP |
| /phd-skills:gaps <topic> | Literature gap analysis with web confirmation |
| /phd-skills:fortify [venue] | Select strongest ablations + anticipate reviewer questions |
| /phd-skills:setup | Interactive onboarding (notifications, allowlist, LaTeX) |
| /phd-skills:help | Show all features at a glance |
| When you say... | Skill activates |
|---|---|
| "design an ablation study" | Experiment Design |
| "find related papers on X" | Literature Research |
| "review my methods section for consistency" | Paper Verification |
| "check if my numbers match the code" | Paper Verification |
| "analyze dataset bias" | Dataset Curation |
| "prepare code for open-source release" | Research Publishing |
| "what will reviewers ask about this?" | Reviewer Defense |
| "setup latex for CVPR" | LaTeX Setup |
| Agent | What it does | Special |
|---|---|---|
| paper-auditor | Cross-checks paper claims vs code and data | Runs in isolated worktree, remembers patterns across sessions |
| experiment-analyzer | Analyzes results from wandb/neptune/local/any format | Can schedule monitoring via cron, sends SSH notifications |
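The cron-based monitoring the experiment-analyzer can set up looks roughly like the following crontab sketch; the script path and schedule are assumptions for illustration, not the agent's actual output:

```
# Hypothetical crontab entry: every 30 minutes, run a log-check script
# that notifies you when metrics beat the baseline or loss diverges.
*/30 * * * * /home/me/experiments/check_logs.sh >> /home/me/experiments/monitor.log 2>&1
```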