Academic expert analysis skills — CS, Database, Statistics, Data Science, Distributed Systems, PL Theory, Security & Cryptography perspectives for any codebase. Multilingual output via config
Academic expert perspectives for any codebase. Six specialized review commands that analyze your project from different research domain viewpoints.
| Command | Role | What it reviews |
|---|---|---|
| /cs | Computer Science PhD | Algorithm complexity, data structures, concurrency correctness |
| /db | Database Theory PhD | Schema normalization, query optimization, consistency models |
| /stats | Statistics PhD | A/B test design, metric validity, statistical significance |
| /ds | Data Science PhD | ML pipelines, feature engineering, model evaluation |
| /dist-sys | Distributed Systems PhD | Consensus, fault tolerance, partition handling |
| /pl | PL Theory PhD | Type safety, abstraction design, error handling patterns |
Install the plugin:

```
/plugin marketplace add JFK/claude-phd-panel-plugin
/plugin install claude-phd-panel
```
Run any command to get a full academic review:
```
/claude-phd-panel:cs              # Full CS review of current repo
/claude-phd-panel:cs owner/repo   # Analyze a specific repo
/claude-phd-panel:db schema       # Focus on schema design
/claude-phd-panel:stats metrics   # Focus on metrics/experiments
```
Ask any PhD a direct question — they'll answer from their academic perspective, grounded in your actual codebase:
```
/claude-phd-panel:cs Is this sorting approach optimal for our data size?
/claude-phd-panel:db Should we denormalize this table for read performance?
/claude-phd-panel:stats Is our A/B test sample size sufficient?
/claude-phd-panel:ds Is there data leakage in our feature pipeline?
/claude-phd-panel:dist-sys Can this service handle network partitions?
/claude-phd-panel:pl Should we use generics here or concrete types?
```
Run PhD Panel commands alongside Claude C-Suite commands in the same session for executive + academic perspectives:
```
/claude-c-suite:cto      # CTO flags DB performance concern
/claude-phd-panel:db     # DB PhD validates with academic rigor
```
Uses the gh CLI to gather issues, milestones, and commit history.

License: MIT
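As a rough sketch of the repository context this gathering step corresponds to, the standalone commands below approximate it (this is an assumption about the plugin's behavior, not its exact invocations; gh calls are skipped if gh is not installed or you are not authenticated):

```shell
#!/usr/bin/env sh
# Sketch: the kind of repo context a review command might pull in.
# Recent commit history (tolerates being run outside a git repo).
git log --oneline -n 20 2>/dev/null || echo "not inside a git repo"

# Issues and milestones, only if the GitHub CLI is available.
if command -v gh >/dev/null 2>&1; then
  gh issue list --state open --limit 20      # open issues
  gh api 'repos/{owner}/{repo}/milestones'   # milestones for this repo
fi
```

The `{owner}/{repo}` placeholders in `gh api` are resolved by gh from the current repository's remote.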