By fcakyon
Audit ML/CV research papers against codebases, data, and experiments to verify claims, citations, numerical accuracy, and methodology. Design ablation studies with cost estimates, analyze datasets for bias, set up LaTeX environments, conduct literature gap analysis, and prepare repos for open-source release.
Install with `npx claudepluginhub fcakyon/phd-skills --plugin phd-skills`.

Verify BibTeX entries and cited claims against DBLP and web sources. Checks author names, venues, years, and specific numerical claims.
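As an illustration, these are the kinds of fields the citation verifier cross-checks against DBLP. The entry below is entirely hypothetical, written only to show the checked fields (author list, venue, year) alongside a numerical claim the paper might attribute to it:

```bibtex
@inproceedings{doe2020example,
  author    = {Jane Doe and John Smith},
  title     = {A Hypothetical Method for Example Tasks},
  booktitle = {Proceedings of an Example Conference},
  year      = {2020}
}
```

A claim in the paper such as "Doe et al. (2020) report 94.1% accuracy" would be checked both against the DBLP record (do the authors, venue, and year match?) and against web sources for the specific number.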
Select the strongest ablations and anticipate reviewer questions. Reads the paper and experiment results to prepare for peer review.
Literature gap analysis with web confirmation. Takes a research topic or question and maps existing work to find what's missing.
Show all phd-skills features, commands, skills, agents, and hooks at a glance. Use for in-session discoverability.
Interactive onboarding wizard — configure notifications, CLI allowlist, research-specific CLAUDE.md rules, and LaTeX environment.
Multi-dimensional paper audit — spawns 5 parallel sub-agents to check numerical accuracy, terminology consistency, code-paper alignment, citation accuracy, and evaluation integrity.
Analyze experiment results from any tracking system. Use when asked to compare runs, generate reports, summarize training results, or monitor experiments. Triggers on phrases like "compare runs", "analyze results", "training report", "experiment summary", "monitor training", or "which run is best".
Autonomous paper consistency verification. Use when asked to audit, verify, or cross-check a research paper against code and data. Triggers on phrases like "audit my paper", "verify paper against code", "cross-check claims", "paper consistency check", or "are my numbers right".
Use when the user wants to analyze dataset bias, create stratified samples, evaluate fairness, or plan dataset collection. Triggers on phrases like "dataset bias", "stratified sample", "class imbalance", "data distribution", "fairness analysis", or "ethical review".
Use when the user wants to design experiments, plan ablation studies, structure baselines, or create incremental evaluation strategies. Triggers on phrases like "design ablation", "plan experiment", "what experiments should I run", "baseline comparison", or "experiment matrix".
Use when the user wants to set up or troubleshoot a LaTeX environment, choose between biber and bibtex, install packages for a specific venue template, or configure compilation. Triggers on phrases like "setup latex", "biber vs bibtex", "latex compilation error", "install latex packages", "venue template", or "texlive setup".
Use when the user wants to find related work, survey a research area, identify literature gaps, or discover open-source implementations. Triggers on phrases like "find papers on", "related work", "literature review", "what papers exist", "open source implementation", or "papers with code".
Use when the user wants to verify paper claims against code or data, audit numerical accuracy, check formula-code alignment, or validate citation accuracy. Triggers on phrases like "verify claims", "check numbers", "do the numbers match", "formula vs code", "audit the paper", or "cross-check results".
Use when the user wants to write, structure, or revise academic paper sections, improve notation consistency, or refine figures and tables. Triggers on phrases like "write the abstract", "structure the methods", "improve this section", "notation consistency", "figure refinement", or "paper structure".
Use when the user wants to prepare code for open-source release, create reproducible research artifacts, or structure a repository for publication. Triggers on phrases like "publish code", "open source release", "reproducibility", "research repository", "code release", or "prepare for publication".
Use when the user wants to anticipate reviewer questions, select the strongest ablations to present, prepare rebuttals, or identify paper weaknesses before submission. Triggers on phrases like "reviewer questions", "anticipate reviewers", "rebuttal", "paper weaknesses", "defend the paper", or "strengthen the paper".
Oh My Paper research harness: memory system, Codex delegation, and pipeline commands for academic research projects.
Semi-automated research assistant for academic research and software development, with skills for literature review, experiments, analysis, writing, and project knowledge management
Production-grade academic research pipeline for Claude Code: research → write → review → revise → finalize. Ships 4 skills (deep-research, academic-paper, academic-paper-reviewer, academic-pipeline) covering 35+ modes, 32-agent ensemble, Material Passport handoff schema, v3.6.7 cross-model audit gate (synthesis + research-architect + report-compiler pattern protection layer), and v3.6.8 generator-evaluator contract for paper drafting.
PhD-level research capabilities: literature review, multi-source investigation, critical analysis, hypothesis-driven exploration, quantitative/qualitative methods, and lateral thinking
Guardrails your research workflow — checks hypotheses, catches known bugs, flags sloppy methodology.
Executes bash commands
Hook triggers when Bash tool is used
Modifies files
Hook triggers on file write and edit operations
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Uses power tools
Uses Bash, Write, or Edit tools