OpenAgentsControl — multi-agent orchestration for Claude Code. Context-aware development with skills, subagents, parallel execution, and automated code review.
npx claudepluginhub darrenhinde/openagentscontrol --plugin oac
Use before implementing anything — discovers context and proposes a plan for approval before writing code.
Use when encountering any bug, test failure, or unexpected behavior — before proposing any fixes.
$ARGUMENTS
Clean up old temporary files from the .tmp directory
Show OAC workflow overview and available skills
Show OAC plugin status, installed context, and available skills
Review code for security vulnerabilities, correctness, and quality. Use after implementation is complete and before committing. Examples: <example> Context: coder-agent has finished implementing a new auth service. user: "The auth service is done, can you check it?" assistant: "I'll run the code-review skill to have code-reviewer validate it before we commit." <commentary>Implementation is complete — code-reviewer validates before commit.</commentary> </example> <example> Context: User is about to merge a PR with database query changes. user: "Review src/db/queries.ts before I merge" assistant: "Using code-reviewer to check for SQL injection and correctness issues." <commentary>Explicit review request on specific files — code-reviewer is the right agent.</commentary> </example>
Execute a single coding subtask from a JSON task file. Use when a subtask_NN.json file exists with acceptance criteria and deliverables. Examples: <example> Context: The task-manager has created subtask_01.json for a JWT service. user: "Implement the JWT service subtask" assistant: "I'll delegate this to the coder-agent with the subtask JSON." <commentary>A subtask JSON file exists with clear criteria — coder-agent is the right choice.</commentary> </example> <example> Context: User asks to fix a bug in auth middleware. user: "Fix the token expiry bug in auth.middleware.ts" assistant: "Let me use the code-execution skill to handle this via coder-agent." <commentary>A concrete implementation task with a specific file — coder-agent executes it.</commentary> </example>
Manages context files, discovers context roots, validates structure, and organizes project context
Discover relevant context files, coding standards, and project conventions. Use before implementation begins to find the right standards to follow. Examples: <example> Context: User wants to build a new authentication feature. user: "Build me a JWT authentication system" assistant: "Before implementing, I'll use context-scout to find the security and auth standards for this project." <commentary>New feature starting — context-scout finds the relevant standards first.</commentary> </example> <example> Context: coder-agent needs to know the project's TypeScript conventions. user: "What TypeScript patterns should I follow here?" assistant: "Let me use context-scout to discover the TypeScript standards in this project's context." <commentary>Standards needed before coding — context-scout navigates the context system to find them.</commentary> </example>
Fetches external library and framework documentation from Context7 API and other sources, caching results for offline use
Break down complex features into atomic, verifiable subtasks with dependency tracking and JSON-based progress management
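The subtask JSON files referenced by the task-manager and coder-agent might look like the sketch below. This is an illustrative assumption, not the plugin's confirmed schema — the field names (id, title, depends_on, acceptance_criteria, deliverables, status) are hypothetical placeholders showing the kind of atomic, verifiable subtask with dependency tracking that the descriptions above refer to:

```json
{
  "id": "subtask_01",
  "title": "Implement JWT service",
  "depends_on": [],
  "acceptance_criteria": [
    "Tokens are signed and verified round-trip",
    "Expired tokens are rejected"
  ],
  "deliverables": ["src/auth/jwt.service.ts"],
  "status": "pending"
}
```

Under this sketch, the coder-agent would treat the acceptance_criteria as the definition of done and the deliverables as the files it is allowed to create or modify.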
Test authoring and TDD specialist - writes comprehensive tests following project testing standards
Use when a subtask is ready to implement and has a subtask JSON file with acceptance criteria and deliverables.
Use when code has been written and needs validation before committing, or when the user asks for a code review or security check.
Use when coding standards, security patterns, or project conventions need to be discovered before implementation begins.
Install context files from registry. Use when user runs /install-context, says "install context", "setup context", or when context is missing and the user needs to get started.
Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
Use when the task involves an external library or package and current API docs are needed before writing code.
Use before any implementation — understands the request, discovers project context, and proposes a concise plan for user approval before writing any code.
Use when multiple subtasks have no shared files or dependencies and can be executed simultaneously.
Use when a feature touches 4 or more files, involves multiple components, or has subtasks that could run in parallel.
Use when the user asks for tests, mentions TDD, or when new code has been written and needs test coverage.
Use when starting any conversation — establishes how to find and use OAC skills. Requires a Skill tool invocation BEFORE ANY response, including clarifying questions; this is your secret weapon for performing tasks at your best.
Use when about to claim work is complete, fixed, or passing, and before committing or creating PRs — requires running verification commands and confirming their output before making any success claims; evidence before assertions, always.
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
AI-powered development tools for code review, research, design, and workflow automation.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams