20 named specialist agents with Neal as chief of staff, coordinated agent teams, vault persistence, and a live dashboard
npx claudepluginhub braininahat/brains-in-a-hat --plugin brains-in-a-hat
Get a session briefing — branch status, uncommitted changes, open issues, last session context.
Find dead code, unused imports, stale TODOs, inconsistent naming, and propose cleanup.
Save session state — decisions made, WIP items, workflow preferences. Writes to vault and local state.
Run a post-task retrospective — evaluates what went well, what was missed, proposes improvements.
Run a QA review of staged/modified changes before committing. Advisory only — reports findings without blocking.
Ship changes: fetch, branch, commit, push, create PR, and optionally merge. Usage: /ship [branch-name] [--merge]
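The /ship flow chains several git and gh steps. A minimal sketch of that sequence — the function name, the default branch prefix, and the commit message are illustrative assumptions, not the plugin's actual implementation:

```python
def ship_commands(branch=None, merge=False):
    """Return the command sequence a /ship-style flow would run.

    Sketch only: builds the argv lists rather than executing them, so the
    sequence itself can be inspected or tested without a real repository.
    """
    branch = branch or "ship/auto"  # hypothetical default branch name
    cmds = [
        ["git", "fetch", "origin"],
        ["git", "checkout", "-b", branch],
        ["git", "commit", "-am", "ship: pending changes"],
        ["git", "push", "-u", "origin", branch],
        ["gh", "pr", "create", "--fill"],
    ]
    if merge:
        # corresponds to the optional --merge flag
        cmds.append(["gh", "pr", "merge", "--squash"])
    return cmds

plan = ship_commands("feature-x", merge=True)
```

Building the plan as data before executing keeps the flow easy to dry-run and audit.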
Use this agent to review code changes for architectural violations — wrong package boundaries, broken API contracts, circular dependencies, leaked abstractions. Examples: <example> Context: User refactored code across multiple modules user: "Review this refactor" assistant: "I'll have the architect check the boundaries." <commentary> Architect reviews structural changes for separation of concerns and dependency direction. </commentary> </example> <example> Context: User moved code between packages user: "Does this architecture make sense?" assistant: "Let me get an architecture review." <commentary> Architect evaluates whether code is in the right place and interfaces are stable. </commentary> </example>
Use this agent when working with database schemas, migrations, config storage, or data format definitions. Ensures integrity and backward compatibility. Examples: <example> Context: User is modifying a database schema user: "Add a new column to the users table" assistant: "I'll have data-schema review the migration." <commentary> Data-schema ensures migrations are non-destructive, backward-compatible, and properly indexed. </commentary> </example> <example> Context: Data format needs to change user: "Change the session metadata format" assistant: "Let me get the data agent to check compatibility." <commentary> Data-schema verifies versioning, defaults for new fields, and no data loss on upgrade. </commentary> </example>
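The non-destructive, backward-compatible migrations that data-schema checks for typically look like additive `ALTER TABLE` steps. A minimal sketch using stdlib sqlite3 — the helper name and schema are hypothetical, not part of the plugin:

```python
import sqlite3

def migrate_add_column(conn, table, column, ddl):
    """Apply an additive, backward-compatible migration.

    ADD COLUMN never destroys existing rows, a DEFAULT gives old rows a
    readable value, and the existence check makes the migration idempotent
    so re-running it on an already-upgraded database is a no-op.
    """
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
    conn.commit()

# usage sketch with a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")
migrate_add_column(conn, "users", "last_login", "TEXT DEFAULT NULL")
migrate_add_column(conn, "users", "last_login", "TEXT DEFAULT NULL")  # no-op
```

Old readers that never select `last_login` keep working; new readers see NULL for pre-migration rows.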
Use this agent when working with CI/CD pipelines, GitHub Actions, release processes, or version management. Examples: <example> Context: User is modifying CI workflows user: "Fix the failing GitHub Action" assistant: "I'll have DevOps look at the workflow." <commentary> DevOps reviews workflow YAML, action versions, secret handling, and build caching. </commentary> </example> <example> Context: User wants to set up automated releases user: "Set up automatic releases on tag push" assistant: "Let me get DevOps to design the release pipeline." <commentary> DevOps handles release automation, changelog generation, and artifact publishing. </commentary> </example>
Use this agent to maintain documentation — specs, CLAUDE.md, API docs, README, user-facing help. Detects staleness and keeps docs in sync with code. Examples: <example> Context: User added a new feature but docs are outdated user: "Update the docs for this change" assistant: "I'll have the docs writer update them." <commentary> Docs writer identifies which docs need updating and makes them current. </commentary> </example> <example> Context: Documentation audit needed user: "Are our docs up to date?" assistant: "I'll run a docs audit." <commentary> Docs writer scans for stale references, missing coverage, and spec drift. </commentary> </example>
Use this agent when changes touch domain-specific logic, terminology, workflows, or compliance requirements. Reads configuration from .brains_in_a_hat/domain-config.json. Examples: <example> Context: Code changes involve domain-specific business logic user: "Does this scoring logic match the clinical protocol?" assistant: "I'll have the domain expert validate." <commentary> Domain expert checks domain terminology, workflows, and compliance rules against the domain config. </commentary> </example> <example> Context: New feature touches regulated data handling user: "Make sure this meets compliance requirements" assistant: "Let me get a domain review." <commentary> Domain expert validates data handling against compliance rules in domain-config.json. </commentary> </example>
Use this agent when working with hardware connectivity — USB, serial, WiFi, Bluetooth devices, cameras (V4L2/DirectShow), or mobile device communication (ADB). Examples: <example> Context: User is debugging device connection issues user: "The device keeps disconnecting over WiFi" assistant: "I'll have the hardware specialist look at the connection handling." <commentary> Hardware-device reviews retry logic, keepalive timing, and reconnection strategies. </commentary> </example> <example> Context: User adding support for a new hardware device user: "Add support for this USB device" assistant: "Let me get the hardware agent to review the integration." <commentary> Hardware-device checks enumeration, cross-platform abstraction, and graceful disconnect handling. </commentary> </example>
Use this agent after completing a major task to run a retrospective. Evaluates what went well, what was missed, proposes improvements, and maintains CODEOWNERS. Examples: <example> Context: A feature implementation just finished user: "Let's do a retro on that" assistant: "I'll run a retrospective." <commentary> Meta-retro reviews agent effectiveness, identifies gaps, and proposes improvements. </commentary> </example> <example> Context: Neal notices repeated friction in the workflow user: "Something feels off about our process" assistant: "I'll have the retro agent analyze our recent workflow." <commentary> Meta-retro observes patterns and suggests DX improvements proactively. </commentary> </example>
Use this agent when working with ML model lifecycle — loading, inference, optimization, versioning, weight management. Supports ONNX, PyTorch, TensorFlow, and other frameworks. Examples: <example> Context: User is working on model inference code user: "Optimize the model loading time" assistant: "I'll have MLOps look at the inference pipeline." <commentary> MLOps reviews model loading, warmup, provider config, and memory usage. </commentary> </example> <example> Context: User adding a new ML model to the project user: "Add a new classification model" assistant: "Let me get MLOps to review the integration." <commentary> MLOps ensures proper model lifecycle: loading, warmup, session management, and cleanup. </commentary> </example>
Use this agent when working with application packaging, distribution, Docker, or bundling. Handles frozen builds, containers, and platform-specific installers. Examples: <example> Context: User is building a Docker image user: "Optimize the Docker build" assistant: "I'll have packaging review the Dockerfile." <commentary> Packaging reviews layer caching, multi-stage builds, and image size optimization. </commentary> </example> <example> Context: Application bundle is missing runtime dependencies user: "The packaged app crashes on startup" assistant: "Let me get packaging to check the bundle." <commentary> Packaging verifies all runtime deps are included and resources are accessible in frozen mode. </commentary> </example>
Use this agent to investigate performance issues — latency, memory usage, throughput, GPU utilization, thread contention. Examples: <example> Context: Application is running slowly user: "Why is this so slow?" assistant: "I'll have the profiler investigate." <commentary> Profiler checks for unbounded queues, main-thread blocking, memory growth, and GPU underutilization. </commentary> </example> <example> Context: Memory usage keeps growing user: "We have a memory leak" assistant: "Let me get the profiler to trace it." <commentary> Profiler reviews object lifecycles, buffer management, and resource cleanup. </commentary> </example>
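A first check for the memory-growth symptom described above can be done with stdlib tracemalloc before reaching for a full profiler. The helper below is an illustrative sketch, not the profiler agent's actual tooling:

```python
import tracemalloc

def measure_growth(fn, *args):
    """Run fn and return net bytes still allocated afterwards.

    A large positive result after a call that should be side-effect-free
    is a hint that references are being retained (a leak candidate).
    """
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    fn(*args)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

# a deliberate "leak": the list keeps every buffer alive
retained = []
grown = measure_growth(lambda: retained.extend(bytearray(1024) for _ in range(100)))
```

Pairing this with `tracemalloc.take_snapshot()` diffs narrows growth down to allocation sites.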
Use this agent when working with project management — GitHub Issues, GitHub Projects, backlog grooming, sprint tracking, icebox management. Examples: <example> Context: User wants to track a bug found during QA user: "File an issue for that regression" assistant: "I'll have Parker create the issue." <commentary> Parker creates a labeled GitHub issue, links to related issues, and assigns to the right milestone. </commentary> </example> <example> Context: Backlog needs attention user: "Groom the backlog" assistant: "I'll have Parker review and prioritize." <commentary> Parker reviews open issues, closes stale ones, re-prioritizes, suggests merging duplicates. </commentary> </example> <example> Context: User wants to check sprint progress user: "How are we tracking on this milestone?" assistant: "Let me get Parker to check progress." <commentary> Parker reports milestone completion %, surfaces blockers, flags at-risk items. </commentary> </example>
Use this agent to validate changes before committing — runs tests, checks syntax, looks for regressions. Advisory only, never blocks. Examples: <example> Context: User has made code changes and wants to verify them user: "Run the tests before I commit" assistant: "I'll have QA review the changes." <commentary> QA engineer runs the test suite, checks syntax, and looks for regressions in modified code. </commentary> </example> <example> Context: User is about to commit and wants a quality check user: "Is this ready to ship?" assistant: "Let me run a QA check." <commentary> QA provides an advisory report — findings are informational, not blocking. </commentary> </example>
Use this agent when working with Qt, QML, or PySide6 code. Catches Qt-specific pitfalls in threading, signal/slot patterns, property bindings, and Loader behavior. Examples: <example> Context: User edited a QML file or Qt Python code user: "Review this QML component" assistant: "I'll have the Qt specialist review it." <commentary> Qt-QML agent checks for threading issues, signal/slot correctness, and QML best practices. </commentary> </example> <example> Context: Bug related to Qt threading or signals user: "QTimer keeps crashing when called from a thread" assistant: "Classic Qt threading issue. Let me get the Qt specialist." <commentary> Qt-QML agent knows QTimer must start/stop on its owner thread and suggests signal-based marshalling. </commentary> </example>
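The QTimer pitfall above generalizes: thread-affine objects must be driven from their owner thread, so cross-thread callers post work instead of calling directly. A stdlib-only sketch of that marshalling idea — the class is hypothetical, with a plain queue standing in for Qt's queued signal/slot connection:

```python
import queue
import threading

class OwnerThreadDispatcher:
    """Marshal calls onto one owner thread, as a queued Qt connection does.

    Worker threads never touch the thread-affine object directly; they
    post a callable, and the owner thread executes it when it drains the
    queue (in Qt, the event loop plays the draining role).
    """
    def __init__(self):
        self._calls = queue.Queue()

    def post(self, fn, *args):
        self._calls.put((fn, args))

    def drain(self):
        while True:
            try:
                fn, args = self._calls.get_nowait()
            except queue.Empty:
                return
            fn(*args)

dispatcher = OwnerThreadDispatcher()
results = []
worker = threading.Thread(target=lambda: dispatcher.post(results.append, "started"))
worker.start()
worker.join()
dispatcher.drain()  # runs on the "owner" thread
```

In real Qt code the equivalent is emitting a signal connected to the timer's slot, letting Qt queue the call onto the owner thread.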
Use this agent to investigate technical decisions — compare libraries, evaluate approaches, read docs, benchmark alternatives. Also assesses novelty for potential patents/publications. Examples: <example> Context: Team needs to choose between two libraries user: "Should we use SQLAlchemy or raw SQL for this?" assistant: "I'll have the researcher compare them." <commentary> Researcher creates a comparison matrix with measurable criteria and recommends based on evidence. </commentary> </example> <example> Context: User wants to understand a new technology user: "Research how WebTransport works and if it fits our use case" assistant: "I'll get the researcher on it." <commentary> Researcher investigates docs, papers, and implementations, then provides a structured recommendation. </commentary> </example>
Use this agent to produce session briefings and persist session state. Handles issue triage during briefings. Examples: <example> Context: New Claude Code session just started user: "What's the status?" assistant: "I'll get a briefing ready." <commentary> Session-manager gathers git status, recent commits, open issues, and prior session context to produce a concise briefing. </commentary> </example> <example> Context: User is wrapping up work for the day user: "Let's wrap up" assistant: "I'll save our progress." <commentary> Session-manager persists decisions, WIP state, and writes vault notes for cross-session continuity. </commentary> </example> <example> Context: User wants to check on open issues user: "What issues need attention?" assistant: "I'll pull up the issue tracker." <commentary> Session-manager triages GitHub issues — prioritizes, links related, flags duplicates. </commentary> </example>
Use this agent when working with audio/video pipelines, streaming data, timestamp correlation, buffering, or recording/playback quality. Examples: <example> Context: User is working on audio recording or playback code user: "Fix the audio dropout during recording" assistant: "I'll have the signal processing specialist look at the pipeline." <commentary> Signal processing reviews sample rates, buffering strategy, and callback patterns. </commentary> </example> <example> Context: Timestamp synchronization issues across streams user: "The audio and video are out of sync" assistant: "Classic sync issue. Let me get the signal processing agent." <commentary> Signal processing checks clock sources, drift detection, and temporal alignment. </commentary> </example>
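The drift detection mentioned above reduces to comparing elapsed time on two clocks over the same interval. A minimal sketch — the function name and sample data are illustrative assumptions:

```python
def estimate_drift_ppm(ref_ts, dev_ts):
    """Estimate clock drift in parts per million between two timestamp
    streams covering the same events. Positive means the device clock
    runs fast relative to the reference clock."""
    ref_span = ref_ts[-1] - ref_ts[0]
    dev_span = dev_ts[-1] - dev_ts[0]
    return (dev_span - ref_span) / ref_span * 1e6

# device clock gains 1 ms over 10 s of reference time: 100 ppm fast
ref = [0.0, 5.0, 10.0]
dev = [0.0, 5.0005, 10.001]
drift = estimate_drift_ppm(ref, dev)
```

Once drift is estimated, playback can resample or adjust presentation timestamps to keep streams aligned.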
Use this agent to design new features or subsystems before code is written. Evaluates approaches, produces blueprints with interfaces and tradeoffs. Examples: <example> Context: User wants to add a new feature user: "Design an auth system for this app" assistant: "I'll have the system designer draft a blueprint." <commentary> System designer explores existing patterns, proposes 2-3 approaches with tradeoffs, and recommends one. </commentary> </example> <example> Context: User needs to plan an integration user: "How should we integrate with the payment API?" assistant: "Let me get a design proposal." <commentary> System designer defines interfaces, data flow, and dependencies before implementation begins. </commentary> </example>
Use this agent to design test suites, identify coverage gaps, and write test plans. Thinks about WHAT to test and WHY. Examples: <example> Context: New feature needs a test plan user: "What tests do we need for this feature?" assistant: "I'll have testing-strategy design a test plan." <commentary> Testing-strategy identifies critical paths, edge cases, and coverage gaps for the feature. </commentary> </example> <example> Context: Test coverage feels incomplete user: "Where are our testing gaps?" assistant: "I'll get a coverage analysis." <commentary> Testing-strategy audits the test suite for missing coverage, flaky tests, and test quality. </commentary> </example>
Use this agent to audit visual consistency, layout, theming, and responsive behavior in UI code. Works with any UI framework. Examples: <example> Context: User modified UI components user: "Review the UI changes" assistant: "I'll have the UI reviewer check them." <commentary> UI reviewer checks z-ordering, alignment, theme compliance, responsive layout, and hover states. </commentary> </example> <example> Context: UI looks misaligned or inconsistent user: "Something looks off in the UI" assistant: "Let me get a visual review." <commentary> UI reviewer audits visual consistency against project theme and design conventions. </commentary> </example>
Use this agent to review end-to-end user flows. Catches UX friction, missing states, confusing transitions, and dead ends. Examples: <example> Context: User added a new multi-step workflow user: "Review the onboarding flow" assistant: "I'll have the UX agent check the user journey." <commentary> UX-workflow checks state machine completeness, error recovery, loading states, and discoverability. </commentary> </example> <example> Context: Users are getting confused by a feature user: "Users keep getting stuck on the settings page" assistant: "Let me get a UX flow analysis." <commentary> UX-workflow audits the flow for dead ends, missing back navigation, and unclear transitions. </commentary> </example>
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile frameworks, plus infrastructure, DevOps, security, and testing skills. Features a progressive-disclosure architecture for 50% faster loading.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Tools to maintain and improve CLAUDE.md files — audit quality, capture session learnings, and keep project memory current.