Development toolkit for GUI automation agents with /dev command (6-step workflow), 8 agents, and code review skill
npx claudepluginhub zlyv587/marketplace --plugin gui-agent-dev

Use this agent when adding new action types to the GUI agent, extending InputController capabilities, or designing new automation actions. Examples: <example> Context: User wants to add drag-and-drop support user: "I need to add a drag action for drag-and-drop operations" assistant: "I'll use the action-designer agent to help design the drag action with proper schema and implementation." <commentary> Adding new action types requires careful schema design and InputController integration. </commentary> </example> <example> Context: User wants to handle a specific UI pattern user: "How can I make the agent handle dropdown menus better?" assistant: "Let me use the action-designer agent to analyze if we need new actions or can improve existing ones for dropdown handling." <commentary> UI pattern handling may require new actions or combinations of existing ones. </commentary> </example> <example> Context: User is extending the ActionType enum user: "I'm adding a new action type, what do I need to consider?" assistant: "I'll help you design the complete action with the action-designer agent - schema, implementation, and prompt updates." <commentary> New action types need coordinated changes across multiple components. </commentary> </example>
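The drag-action scenario above might be sketched like this. This is a minimal illustration only: `ActionType`, `DragAction`, and every field name here are hypothetical, not the plugin's actual schema.

```python
from dataclasses import dataclass
from enum import Enum


class ActionType(Enum):
    # Hypothetical action registry; the plugin's real enum may differ.
    CLICK = "click"
    TYPE = "type"
    DRAG = "drag"  # the new action being designed


@dataclass
class DragAction:
    """Drag from a start point to an end point, in normalized [0, 1] coordinates."""
    start_x: float
    start_y: float
    end_x: float
    end_y: float
    duration_s: float = 0.5  # slower drags tend to be more reliable across UIs

    def validate(self) -> None:
        # Reject coordinates outside the normalized range before execution.
        for v in (self.start_x, self.start_y, self.end_x, self.end_y):
            if not 0.0 <= v <= 1.0:
                raise ValueError(f"coordinate {v} outside normalized range [0, 1]")


action = DragAction(start_x=0.2, start_y=0.5, end_x=0.8, end_y=0.5)
action.validate()  # raises ValueError if any coordinate is out of range
```

As the last `<commentary>` notes, a real addition would also touch the InputController and the system prompt, not just the schema.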
Designs feature architectures by analyzing existing codebase patterns and conventions, then providing 2-3 solution options with implementation blueprints for user selection
Deeply analyzes existing codebase features by tracing execution paths, mapping architecture layers, understanding patterns and abstractions, and documenting dependencies to inform new development
Reviews code for bugs, logic errors, security vulnerabilities, code quality issues, and adherence to project conventions, using confidence-based filtering to report only high-priority issues
Use this agent when working on context management, optimizing token usage, improving history compression, or debugging KV-cache efficiency. Examples: <example> Context: User is concerned about token costs or context window limits user: "The agent is using too many tokens, tasks are getting expensive" assistant: "I'll use the context-optimizer agent to analyze your token usage and suggest optimizations." <commentary> Token optimization is a core concern for long-running GUI automation tasks. </commentary> </example> <example> Context: User is working on context_manager.py user: "How can I make the compression more efficient?" assistant: "Let me analyze your context management with the context-optimizer agent to identify compression improvements." <commentary> Direct work on context management code benefits from specialized optimization analysis. </commentary> </example> <example> Context: User notices the agent forgetting earlier actions user: "The agent seems to forget what it did earlier in long tasks" assistant: "I'll use the context-optimizer agent to analyze how history is being managed and compressed." <commentary> Memory/history issues are context management problems requiring specialized analysis. </commentary> </example>
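One compression tactic the context-optimizer examples allude to, keeping recent turns verbatim while collapsing older ones and dropping their screenshots (which dominate token cost), can be sketched as follows. The function name and turn structure are assumptions, not the plugin's `context_manager.py` API.

```python
def compress_history(turns, keep_recent=3):
    """Keep full detail for the last `keep_recent` turns; collapse the rest.

    Each turn is assumed to be a dict like {"action": str, "screenshot": bytes | None}.
    Older screenshots are dropped and their actions folded into a single
    summary line, so the agent still 'remembers' what it did earlier.
    """
    if len(turns) <= keep_recent:
        return list(turns)
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = {
        "action": "summary: " + "; ".join(t["action"] for t in old),
        "screenshot": None,  # images removed to reclaim context tokens
    }
    return [summary] + list(recent)


history = [{"action": f"click step {i}", "screenshot": b"..."} for i in range(6)]
compressed = compress_history(history, keep_recent=2)
```

Keeping an action summary is what prevents the "agent seems to forget what it did earlier" failure mode described above.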
Use this agent when debugging failed GUI automation tasks, analyzing why actions failed, or troubleshooting coordinate/timing issues. Examples: <example> Context: User ran the agent and it failed to complete a task user: "The agent clicked on the wrong button and then got stuck" assistant: "I'll use the plan-debugger agent to analyze what went wrong and identify the root cause." <commentary> Plan debugging is needed when execution fails - this agent traces through the failure. </commentary> </example> <example> Context: User is looking at screenshots in the screenshots/ directory user: "Why did it click at 0.3, 0.5 when the button is clearly at 0.7, 0.2?" assistant: "Let me use the plan-debugger agent to analyze the coordinate mismatch and identify why the LLM estimated incorrectly." <commentary> Coordinate debugging requires understanding the grid system and LLM vision analysis. </commentary> </example> <example> Context: User mentions the agent took too many iterations user: "It kept repeating the same action over and over" assistant: "I'll analyze this with the plan-debugger agent to understand why the agent didn't recognize the action succeeded." <commentary> Loop detection and action verification issues are core debugging scenarios. </commentary> </example>
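The coordinate-mismatch example above uses normalized coordinates like (0.3, 0.5). A minimal sketch of the normalized-to-pixel mapping such a system typically uses (function name and rounding choice are assumptions) shows the kind of conversion worth ruling out first when debugging:

```python
def to_pixels(nx: float, ny: float, width: int, height: int) -> tuple[int, int]:
    """Map normalized [0, 1] coordinates to pixel coordinates.

    An off-by-one here, or mixing up (x, y) with (row, col), is a classic
    source of 'clicked the wrong place' failures.
    """
    if not (0.0 <= nx <= 1.0 and 0.0 <= ny <= 1.0):
        raise ValueError("normalized coordinates must lie in [0, 1]")
    return round(nx * (width - 1)), round(ny * (height - 1))


# A click reported at (0.3, 0.5) on a 1920x1080 screenshot:
x, y = to_pixels(0.3, 0.5, 1920, 1080)
```

If the conversion checks out, the mismatch is more likely in the LLM's visual estimate, which is where the grid overlay and screenshot analysis come in.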
Use this agent when working on LLM system prompts, improving GUI agent instructions, or optimizing how the agent communicates with Gemini/Claude. Examples: <example> Context: User is working on agent.py and mentions the LLM isn't understanding coordinates well user: "The agent keeps clicking in the wrong places, I think the prompt needs improvement" assistant: "I'll use the prompt-engineer agent to analyze your system prompt and suggest improvements for coordinate understanding." <commentary> The prompt-engineer agent specializes in optimizing LLM instructions for GUI automation tasks. </commentary> </example> <example> Context: User wants to add a new capability to the agent user: "I want the agent to better understand when a task is complete" assistant: "Let me use the prompt-engineer agent to help design improved task completion detection in your system prompts." <commentary> Prompt engineering is needed to help the LLM better recognize task completion states. </commentary> </example> <example> Context: User is reviewing SYSTEM_PROMPT or PLANNER_SYSTEM constants user: "Can you help me improve this system prompt?" assistant: "I'll analyze your system prompt with the prompt-engineer agent to identify improvements." <commentary> Direct request for prompt optimization - core use case for this agent. </commentary> </example>
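One concrete fix the coordinate-understanding example points at is spelling out the coordinate convention explicitly in the system prompt. The snippet below is a hypothetical illustration of that style, not the plugin's actual `SYSTEM_PROMPT` constant:

```python
# Hypothetical system-prompt fragment; the plugin's real prompt will differ.
SYSTEM_PROMPT = """\
You control a GUI by emitting actions as JSON.
Coordinates are normalized: (0.0, 0.0) is the top-left corner of the
screenshot and (1.0, 1.0) is the bottom-right. Always estimate the CENTER
of the target element, not its edge.
After each action, compare the new screenshot to the previous one; if the
screen did not change, do not repeat the same action - try an alternative.
Reply DONE only when the task is visibly complete in the screenshot.
"""
```

Stating the origin, range, and "center of the element" rule in one place is a common remedy for the "keeps clicking in the wrong places" symptom, and the change-detection instruction targets the repeated-action loop described under plan-debugger.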
Use this agent when analyzing screenshots to debug coordinate issues, understand element positioning, or improve the grid overlay system. Examples: <example> Context: User is looking at a screenshot with grid overlay user: "Can you help me understand why the coordinates are off in this screenshot?" assistant: "I'll use the screenshot-analyzer agent to examine the screenshot and grid overlay alignment." <commentary> Screenshot analysis requires understanding the coordinate grid and element positioning. </commentary> </example> <example> Context: User is debugging VisionAnalyzer results user: "The element location returned (450, 300) but it should be around (600, 200)" assistant: "Let me analyze this with the screenshot-analyzer agent to understand the coordinate discrepancy." <commentary> Coordinate mismatch debugging requires visual analysis of the screenshot. </commentary> </example> <example> Context: User wants to improve element detection user: "How can I help the LLM better identify this button?" assistant: "I'll use the screenshot-analyzer agent to examine how the element appears and suggest improvements." <commentary> Element detection improvement requires understanding what the LLM sees. </commentary> </example>
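A grid overlay like the one described here usually exists so the LLM can name a cell instead of guessing raw coordinates. A minimal sketch of mapping a cell label back to a normalized coordinate (the lettered-column/numbered-row convention and the 10x10 default are assumptions, not the plugin's actual scheme):

```python
import string


def cell_center(cell: str, cols: int = 10, rows: int = 10) -> tuple[float, float]:
    """Return the normalized (x, y) center of a grid cell like 'C4'.

    Assumes columns are lettered A.. left-to-right and rows numbered 1..
    from the top, one common convention for grid overlays.
    """
    col = string.ascii_uppercase.index(cell[0].upper())
    row = int(cell[1:]) - 1
    if not (0 <= col < cols and 0 <= row < rows):
        raise ValueError(f"cell {cell!r} outside a {cols}x{rows} grid")
    # Center of the cell, not its corner, so clicks land inside the element.
    return (col + 0.5) / cols, (row + 0.5) / rows


cx, cy = cell_center("C4")  # third column, fourth row
```

When a returned location like (450, 300) disagrees with where the element visibly is, checking whether the grid labels and this cell-to-coordinate mapping agree is a useful first step before blaming the model's vision.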
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
LLM application development, prompt engineering, and AI assistant optimization
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.