By howells
Orchestrates the full development lifecycle from ideation to production: brainstorm visions, design visual identities and UIs, plan implementations with TDD cycles, execute via specialized agents, run audits and reviews for quality, security, performance, and SEO, generate and fix tests, commit atomically, and validate deployment readiness — all context-aware to your project's stack and git state.
npx claudepluginhub howells/arc --plugin arc
Browse a web app through an expert persona — evaluate the rendered experience with designer or first-time-user eyes.
Smart commit and push with auto-splitting across domains.
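One way the auto-splitting idea can be pictured, as a TypeScript sketch: changed files are grouped by top-level directory as a stand-in for "domain." The grouping heuristic and the `splitByDomain` name are illustrative assumptions, not Arc's actual implementation.

```typescript
// Illustrative only: Arc's real domain-splitting logic is not shown here.
// Group changed file paths into per-domain commit batches by top-level directory.
function splitByDomain(changedFiles: string[]): Map<string, string[]> {
  const batches = new Map<string, string[]>();
  for (const file of changedFiles) {
    // Treat the first path segment as the domain; root-level files get "root".
    const domain = file.includes("/") ? file.split("/")[0] : "root";
    const batch = batches.get(domain) ?? [];
    batch.push(file);
    batches.set(domain, batch);
  }
  return batches;
}

// Each batch would then become one atomic commit, e.g. "docs: update setup guide".
const batches = splitByDomain([
  "src/auth/login.ts",
  "src/auth/session.ts",
  "docs/setup.md",
  "README.md",
]);
```

In practice the real command would also consult git status and commit-message conventions; the sketch only shows the batching step.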
Dependency audit, alternative discovery, and batch upgrades.
Create distinctive, non-generic UI designs with wireframes.
Capture solved problems as searchable documentation.
The main entry point. Understands your codebase and asks what you want to do.
Show all Arc commands with context-aware relevance annotations.
Install Claude Code hooks for automatic formatting, linting, and context monitoring.
Turn ideas into validated designs through collaborative dialogue with expert review.
Scope-aware planning and implementation with TDD.
Production readiness checklist for shipping.
Generate and validate project name candidates with domain and GitHub checks.
Kill orphaned Claude subagent processes that didn't exit cleanly.
Audit and fix responsive/mobile issues across every page with visual verification.
Run expert review on a plan with parallel reviewer agents.
Apply Arc's coding rules to the current project.
Deep SEO audit for web projects.
Opinionated recommendations for what to work on next.
Test strategy and execution. Create test plans, run suites, or fix failing tests.
Clean up completed plans in docs/arc/plans/.
Create or review a high-level vision document for your project.
AI SDK guidance for building AI-powered features.
Run mechanical verification and comprehensive codebase audit.
Create a visual identity system — palette, typography, tone, and generated assets.
Structural pre-validation of implementation plans across 7 dimensions before execution starts.
Quick code quality check per task. Verifies implementation is well-built — not just spec-compliant. Runs after spec-reviewer, before commit. Fast gate check, not deep review. <example> Context: Task just passed spec-reviewer. user: "Quick code quality check before commit" assistant: "I'll dispatch code-reviewer for a fast quality gate" <commentary> Spec says WHAT to build, code-reviewer checks HOW it's built. Quick pass/fail. </commentary> </example>
Use when a test fails unexpectedly during implementation. Investigates root cause systematically, distinguishes between test bugs and implementation bugs, and applies minimal fixes. Prefers event-based solutions over timeout increases. <example> Context: Test fails with timing-related error during implementation. user: "Test 'should complete batch' is failing with timeout" assistant: "I'll use the debugger to investigate the root cause" <commentary> Timing failures need systematic investigation, not timeout increases. Debugger will trace the issue. </commentary> </example> <example> Context: Multiple tests failing after a refactor. user: "3 tests in user-service.test.ts are now failing" assistant: "Let me dispatch the debugger to investigate these failures" <commentary> Concentrated failures in one file suggest a common root cause. Debugger will identify it. </commentary> </example>
Makes design decisions during implementation when specs are incomplete. Creates visual direction, chooses typography/colors, designs empty states, loading states, error states. Outputs actionable specs for ui-builder to implement. <example> Context: Implementation needs an empty state but no design exists. user: "Design the empty state for the dashboard" assistant: "I'll dispatch designer to create a spec for this empty state" <commentary> No Figma or design doc for this. Designer makes the call and outputs a spec. </commentary> </example> <example> Context: Need to add a feature with no existing design. user: "How should the notification dropdown look?" assistant: "Let me get designer to create visual direction for this" <commentary> Designer creates specs on the fly when design docs don't cover something. </commentary> </example>
Runs and fixes E2E tests (Playwright). Handles flaky tests, timing issues, and selector problems. Iterates until green or reports blockers. Keeps verbose output contained. <example> Context: E2E tests created as part of implementation. user: "Run the e2e tests for checkout flow" assistant: "I'll dispatch e2e-runner to run and fix any issues" <commentary> E2E tests produce verbose output. e2e-runner handles iteration and reports summary. </commentary> </example> <example> Context: E2E tests failing after UI changes. user: "E2E tests are broken after the redesign" assistant: "Let e2e-runner investigate and fix the selector issues" <commentary> UI changes often break selectors. e2e-runner will update them systematically. </commentary> </example>
Writes E2E tests with Playwright. Tests complete user journeys in real browsers — signup flows, checkout processes, authentication. Includes auth setup for Clerk and WorkOS. <example> Context: New feature needs end-to-end coverage. user: "Write E2E tests for the onboarding flow" assistant: "I'll dispatch e2e-test-writer to create Playwright tests for the full journey" <commentary> Multi-page user journey = E2E test. Real browser, real interactions. </commentary> </example>
Implements UI components from Figma designs. Use when the user provides a Figma URL or asks to build something from a design. The agent extracts design specifications via Figma MCP and generates production-ready code that respects the codebase's existing design system. Examples: - <example> Context: User shares a Figma link for a new component. user: "Implement this card component: [Figma URL]" assistant: "I'll use the figma-builder agent to build this component." </example> - <example> Context: User wants to add a new section matching a design. user: "Add the pricing section from this Figma file" assistant: "Let me implement that pricing section from the Figma design." </example>
Fast, focused agent for build errors, TypeScript errors, and lint issues. Fixes the immediate problem without refactoring. Handles tsc, biome, import resolution, config issues, and dependency conflicts. Verifies, moves on. Use for mechanical cleanup between implementation steps. <example> Context: TypeScript errors after implementing a feature. user: "Fix the TypeScript errors" assistant: "I'll dispatch fixer to clean these up" <commentary> TypeScript errors are mechanical — fixer handles them quickly without over-engineering. </commentary> </example> <example> Context: Lint issues blocking commit. user: "Biome is complaining about formatting" assistant: "Let fixer handle the lint cleanup" <commentary> Lint issues are mechanical fixes. Fixer applies them without expanding scope. </commentary> </example>
General-purpose implementation agent for executing plan tasks. Follows TDD, commits atomically, and self-reviews before completion. Use for non-specialized tasks (utilities, services, APIs, etc.). <example> Context: Plan has a task to create a utility function. user: "Implement the date formatting utility" assistant: "I'll dispatch the implementer to build this utility with tests" <commentary> Utility functions don't need specialized agents like ui-builder. Implementer handles general tasks. </commentary> </example> <example> Context: Plan has a task to create an API endpoint. user: "Implement the /api/users endpoint" assistant: "Let implementer build this endpoint following TDD" <commentary> API work is general implementation. Implementer follows TDD and project conventions. </commentary> </example>
Writes integration tests with vitest. Tests multiple components working together, API interactions (with MSW mocking), database operations, and authentication flows. More realistic than unit tests. <example> Context: Testing a form that submits to an API. user: "Write integration tests for the signup form" assistant: "I'll dispatch integration-test-writer to test the full form flow with API mocking" <commentary> Form + API = integration test. Uses MSW to mock the API, tests the full component behavior. </commentary> </example>
Verify the entire implementation matches the original plan. Compares plan tasks against actual implementation — catches skipped tasks, partial implementations, and scope creep at the whole-feature level. Runs after all tasks complete, before shipping. <example> Context: All tasks marked complete in an implementation plan. user: "Verify the implementation matches the plan" assistant: "I'll dispatch plan-completion-reviewer to compare the plan against what was built" <commentary> This is the final gate before shipping — ensures nothing was skipped, partially implemented, or added beyond the plan's scope. </commentary> </example>
Quick spec compliance check. Verifies implementation matches the task specification exactly — nothing missing, nothing extra. Run after implementation, before code quality review. <example> Context: Implementer just finished a task. user: "Check if this matches the spec" assistant: "I'll dispatch spec-reviewer for a quick compliance check" <commentary> Spec review comes before code quality review. Catches over/under-building early. </commentary> </example>
Runs vitest test suites and analyzes results. Handles unit and integration test execution, identifies failure patterns, and provides actionable summaries. For E2E/Playwright, use e2e-runner. <example> Context: Need to run the test suite after implementation. user: "Run the tests and tell me what's failing" assistant: "I'll dispatch test-runner to execute vitest and analyze results" <commentary> Unit/integration tests with vitest. Fast feedback, clear failure analysis. </commentary> </example>
Use when building UI components from a design spec or Figma. Creates memorable, distinctive interfaces — not generic AI slop. Loads design rules and applies aesthetic direction with intention. Complements the designer reviewer (which critiques) by actually building. <example> Context: Implementation plan includes UI component tasks. user: "Build the pricing cards from the Figma" assistant: "I'll use ui-builder to create these with the design system" <commentary> UI building needs design awareness. ui-builder will load aesthetic direction and build intentionally. </commentary> </example> <example> Context: Creating a new page with specific aesthetic requirements. user: "Build the landing page hero section" assistant: "Let me dispatch ui-builder with the aesthetic direction from the design doc" <commentary> Hero sections are prime candidates for generic AI aesthetics. ui-builder will ensure distinctiveness. </commentary> </example>
Writes unit tests with vitest. Tests pure functions, utilities, hooks, and component rendering in isolation. Focuses on behavior, not implementation. Fast, isolated, no external dependencies. <example> Context: New utility function needs tests. user: "Write unit tests for the formatCurrency function" assistant: "I'll dispatch unit-test-writer for isolated function testing" <commentary> Pure function = unit test territory. Fast, isolated tests with vitest. </commentary> </example>
Use this agent when you need to gather comprehensive documentation and best practices for frameworks, libraries, or dependencies in your project. This includes fetching official documentation, exploring source code, identifying version-specific constraints, and understanding implementation patterns. <example>Context: The user needs to understand how to properly implement a new feature using a specific library. user: "I need to implement file uploads using Next.js App Router" assistant: "I'll use the docs-researcher agent to gather comprehensive documentation about Next.js file uploads" <commentary>Since the user needs to understand a framework/library feature, use the docs-researcher agent to collect all relevant documentation and best practices.</commentary></example> <example>Context: The user is troubleshooting an issue with a package. user: "Why is the @tanstack/query package not working as expected?" assistant: "Let me use the docs-researcher agent to investigate the TanStack Query documentation and source code" <commentary>The user needs to understand library behavior, so the docs-researcher agent should be used to gather documentation and explore the package's source.</commentary></example>
Use this agent when you need to understand the historical context and evolution of code changes, trace the origins of specific code patterns, identify key contributors and their expertise areas, or analyze patterns in commit history. This agent excels at archaeological analysis of git repositories to provide insights about code evolution and development patterns. <example>Context: The user wants to understand the history and evolution of recently modified files. user: "I've just refactored the authentication module. Can you analyze the historical context?" assistant: "I'll use the git-history-analyzer agent to examine the evolution of the authentication module files." <commentary>Since the user wants historical context about code changes, use the git-history-analyzer agent to trace file evolution, identify contributors, and extract patterns from the git history.</commentary></example> <example>Context: The user needs to understand why certain code patterns exist. user: "Why does this payment processing code have so many try-catch blocks?" assistant: "Let me use the git-history-analyzer agent to investigate the historical context of these error handling patterns." <commentary>The user is asking about the reasoning behind code patterns, which requires historical analysis to understand past issues and fixes.</commentary></example>
Generate and validate project name candidates. Reads codebase context, produces candidates using tech naming strategies, and checks domain + GitHub availability.
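The candidate-generation step could look roughly like this toy TypeScript sketch. The strategies and word roots below are invented for illustration; Arc's actual naming strategies are not shown.

```typescript
// Toy sketch: a few common tech-naming strategies applied to root words.
type Strategy = (root: string) => string;

const strategies: Strategy[] = [
  (root) => root,            // bare root: "flux"
  (root) => `${root}ly`,     // suffix play: "fluxly"
  (root) => `get${root}`,    // verb prefix: "getflux"
  (root) => `${root}hq`,     // org-style: "fluxhq"
];

function nameCandidates(roots: string[]): string[] {
  const seen = new Set<string>();
  for (const root of roots) {
    for (const strategy of strategies) {
      seen.add(strategy(root.toLowerCase()));
    }
  }
  // Each candidate would then be checked for domain and GitHub availability.
  return [...seen];
}

const candidates = nameCandidates(["Flux", "Arc"]);
```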
Use this agent to review UI implementations for accessibility compliance. Checks WCAG 2.1 AA conformance, keyboard navigation, screen reader compatibility, color contrast, motion preferences, form accessibility, and semantic HTML usage. <example> Context: User has built a new form component. user: "Review the signup form for accessibility" assistant: "I'll use the accessibility-engineer to check WCAG compliance and keyboard navigation" <commentary> Forms are a critical accessibility surface — labels, error announcements, fieldset grouping, and keyboard flow all need review. </commentary> </example> <example> Context: User has implemented a new interactive component. user: "Is this dropdown accessible?" assistant: "Let me have the accessibility-engineer check keyboard navigation, ARIA roles, and screen reader support" <commentary> Custom interactive components frequently break accessibility. The accessibility-engineer checks the full interaction model. </commentary> </example>
Use this agent when you need to analyze code changes from an architectural perspective, evaluate system design decisions, or ensure that modifications align with established architectural patterns. This includes reviewing pull requests for architectural compliance, assessing the impact of new features on system structure, or validating that changes maintain proper component boundaries and design principles. <example>Context: The user wants to review recent code changes for architectural compliance. user: "I just refactored the authentication service to use a new pattern" assistant: "I'll use the architecture-engineer agent to review these changes from an architectural perspective" <commentary>Since the user has made structural changes to a service, use the architecture-engineer agent to ensure the refactoring aligns with system architecture.</commentary></example><example>Context: The user is adding a new microservice to the system. user: "I've added a new notification service that integrates with our existing services" assistant: "Let me analyze this with the architecture-engineer agent to ensure it fits properly within our system architecture" <commentary>New service additions require architectural review to verify proper boundaries and integration patterns.</commentary></example>
Use this agent for frontend/UI code reviews. Strict on: type safety (no `any`, no casts), UI completeness (loading/error/empty states), React patterns (React Query not useEffect for data fetching). Confidence-scored findings — only reports issues with ≥80% confidence. Prefer over senior-engineer when reviewing React components, forms, or UI flows. <example> Context: User has implemented a new feature in a TypeScript React project. user: "Review my new checkout component" assistant: "Let me have daniel-product-engineer check this implementation" <commentary> TypeScript/React code gets daniel-product-engineer for strict type safety and UI completeness checks. </commentary> </example> <example> Context: User has built a form with validation. user: "Review the signup form I just created" assistant: "I'll use daniel-product-engineer to check this form implementation" <commentary> Forms and UI flows are daniel-product-engineer's specialty — checking for complete states, proper validation patterns, and type safety. </commentary> </example>
Use this agent when you need to review database migrations, data models, or any code that manipulates persistent data. This includes checking migration safety, validating data constraints, ensuring transaction boundaries are correct, and verifying that referential integrity and privacy requirements are maintained. <example>Context: The user has just written a database migration that adds a new column and updates existing records. user: "I've created a migration to add a status column to the orders table" assistant: "I'll use the data-engineer agent to review this migration for safety and data integrity concerns" <commentary>Since the user has created a database migration, use the data-engineer agent to ensure the migration is safe, handles existing data properly, and maintains referential integrity.</commentary></example> <example>Context: The user has implemented a service that transfers data between models. user: "Here's my new service that moves user data from the legacy_users table to the new users table" assistant: "Let me have the data-engineer agent review this data transfer service" <commentary>Since this involves moving data between tables, the data-engineer should review transaction boundaries, data validation, and integrity preservation.</commentary></example>
Use this agent to evaluate a rendered web app from the perspective of someone about to record a product demo. Finds the best narrative path through the app, identifies "wow moments," flags dead air risks (loading screens, empty states, error edges), and assesses whether flows complete smoothly on camera. Outputs a recommended demo script. <example> Context: User is preparing to record a product walkthrough. user: "I need to record a demo of this app — what's the best flow to show?" assistant: "I'll use the demo-presenter to find the strongest narrative path and flag any rough edges" <commentary> The user needs a demo script, which requires evaluating the product through the lens of what tells the best story on camera. </commentary> </example> <example> Context: User wants to know if their app is demo-ready. user: "Is this ready to show to investors?" assistant: "Let me have the demo-presenter evaluate demo-readiness and find any on-camera risks" <commentary> Demo-readiness requires checking for rough edges that are invisible in normal use but glaring on camera — empty states, slow loads, ugly error handling. </commentary> </example>
Use this agent to review UI implementations for visual design quality and UX. Evaluates aesthetic distinctiveness, catches "AI slop" patterns, and checks UX fundamentals — hierarchy, spacing, color, typography, layout, motion, and interaction patterns. Complements daniel-product-engineer (code quality) and accessibility-engineer (a11y compliance) by focusing on visual and experiential quality. <example> Context: User has implemented a new landing page. user: "Review the design of my new landing page" assistant: "Let me have the designer check this for visual distinctiveness" <commentary> Landing pages are prime candidates for generic AI aesthetics. The designer will check for memorable elements and intentional design choices. </commentary> </example> <example> Context: User has built UI components and wants design feedback. user: "Does this UI look generic?" assistant: "I'll use the designer to evaluate the aesthetic quality" <commentary> The user is specifically concerned about generic aesthetics, which is exactly what this reviewer specializes in. </commentary> </example> <example> Context: User has built a form and wants UX feedback. user: "Is this form well-designed?" assistant: "I'll use the designer to evaluate the form's UX and visual design" <commentary> Forms have both visual design and UX concerns — hierarchy, spacing, validation patterns, input sizing. The designer covers both. </commentary> </example>
Use this agent to evaluate a rendered web app from the perspective of someone who has never seen it before. Evaluates discoverability, clarity, cognitive load, error recovery, progressive disclosure, and terminology. Used by /arc:browse as a persona for browser-based experience evaluation. <example> Context: User wants to evaluate onboarding clarity of their app. user: "Would a new user understand what to do on this page?" assistant: "I'll use the first-time-user agent to evaluate discoverability and clarity" <commentary> The user is asking about first-time experience, which is exactly this agent's lens — evaluating whether someone with no context can orient and take action. </commentary> </example> <example> Context: User has built a new feature and wants to check if it's intuitive. user: "Is this feature discoverable without a tutorial?" assistant: "Let me have the first-time-user evaluate whether someone could find and use this without guidance" <commentary> Discoverability without prior knowledge is the core of this persona's evaluation criteria. </commentary> </example>
Use this agent when you need an opinionated Next.js code review from the perspective of Lee Robinson and the Vercel/Next.js team. This agent excels at identifying React SPA patterns that don't belong in Next.js, misuse of client components, and missed opportunities for server-first architecture. Perfect for reviewing Next.js code where you want uncompromising feedback on modern App Router best practices. <example> Context: The user wants to review a recently implemented Next.js feature. user: "I just implemented data fetching using useEffect and useState in my dashboard" assistant: "I'll use the Lee Next.js reviewer to evaluate this implementation" <commentary> Since the user is using client-side data fetching patterns when Server Components would likely work better, the lee-nextjs-engineer should analyze this critically. </commentary> </example> <example> Context: The user is planning a new Next.js feature and wants feedback. user: "I'm thinking of adding Redux for state management in our Next.js app" assistant: "Let me invoke the Lee Next.js reviewer to analyze this architectural decision" <commentary> Adding Redux to a Next.js app often indicates SPA thinking; the lee-nextjs-engineer should scrutinize whether server state would suffice. </commentary> </example> <example> Context: The user has created API routes for form handling. user: "I've set up API routes and client-side fetch for all my form submissions" assistant: "I'll use the Lee Next.js reviewer to review this approach" <commentary> API routes + client fetch for forms is often unnecessary when Server Actions exist, making this perfect for lee-nextjs-engineer analysis. </commentary> </example>
Use this agent when you need to analyze code for performance issues, optimize algorithms, identify bottlenecks, or ensure scalability. This includes reviewing database queries, memory usage, caching strategies, and overall system performance. The agent should be invoked after implementing features or when performance concerns arise. <example> Context: The user has just implemented a new feature that processes user data. user: "I've implemented the user analytics feature. Can you check if it will scale?" assistant: "I'll use the performance-engineer agent to analyze the scalability and performance characteristics of your implementation." <commentary> Since the user is concerned about scalability, use the Task tool to launch the performance-engineer agent to analyze the code for performance issues. </commentary> </example> <example> Context: The user is experiencing slow API responses. user: "The API endpoint for fetching reports is taking over 2 seconds to respond" assistant: "Let me invoke the performance-engineer agent to identify the performance bottlenecks in your API endpoint." <commentary> The user has a performance issue, so use the performance-engineer agent to analyze and identify bottlenecks. </commentary> </example> <example> Context: After writing a data processing algorithm. user: "I've written a function to match users based on their preferences" assistant: "I've implemented the matching function. Now let me use the performance-engineer agent to ensure it will scale efficiently." <commentary> After implementing an algorithm, proactively use the performance-engineer agent to verify its performance characteristics. </commentary> </example>
Use this agent when you need to perform security audits, vulnerability assessments, or security reviews of code. This includes checking for common security vulnerabilities, validating input handling, reviewing authentication/authorization implementations, scanning for hardcoded secrets, and ensuring OWASP compliance. <example>Context: The user wants to ensure their newly implemented API endpoints are secure before deployment. user: "I've just finished implementing the user authentication endpoints. Can you check them for security issues?" assistant: "I'll use the security-engineer agent to perform a comprehensive security review of your authentication endpoints." <commentary>Since the user is asking for a security review of authentication code, use the security-engineer agent to scan for vulnerabilities and ensure secure implementation.</commentary></example> <example>Context: The user is concerned about potential SQL injection vulnerabilities in their database queries. user: "I'm worried about SQL injection in our search functionality. Can you review it?" assistant: "Let me launch the security-engineer agent to analyze your search functionality for SQL injection vulnerabilities and other security concerns." <commentary>The user explicitly wants a security review focused on SQL injection, which is a core responsibility of the security-engineer agent.</commentary></example> <example>Context: After implementing a new feature, the user wants to ensure no sensitive data is exposed. user: "I've added the payment processing module. Please check if any sensitive data might be exposed." assistant: "I'll deploy the security-engineer agent to scan for sensitive data exposure and other security vulnerabilities in your payment processing module." <commentary>Payment processing involves sensitive data, making this a perfect use case for the security-engineer agent to identify potential data exposure risks.</commentary></example>
Use this agent when you need a thorough code review with asymmetric strictness — strict on changes to existing code, pragmatic on new isolated code. This agent focuses on review process discipline: verifying deletions are intentional, checking testability as a quality signal, and preferring simple duplication over clever abstractions. <example> Context: The user has modified an existing component. user: "I've updated the UserProfile component to add settings" assistant: "Let me have the senior reviewer check these changes to existing code" <commentary> Changes to existing code get stricter review — the senior-engineer will question whether this adds complexity and whether extraction would be better. </commentary> </example> <example> Context: The user has created new isolated code. user: "I've created a new NotificationBanner component" assistant: "I'll have the senior reviewer check this new component" <commentary> New isolated code gets pragmatic review — if it works and is testable, it's acceptable. </commentary> </example> <example> Context: The user has refactored and removed some code. user: "I've refactored the auth flow and cleaned up some old code" assistant: "Let me have the senior reviewer verify the deletions and check for regressions" <commentary> Deletions need explicit verification — was this intentional? What might break? </commentary> </example>
Review a design/spec document for completeness, scope discipline, architecture clarity, and YAGNI.
Use this agent to review web projects for SEO compliance. Checks all vitals from rules/seo.md — meta tags, heading hierarchy, Open Graph, robots.txt, sitemap, structured data, and page classification (marketing vs app). Flags missing or broken SEO elements that would hurt indexing or social sharing. <example> Context: User is preparing to launch a marketing site. user: "Check if my site is ready for SEO" assistant: "I'll use the seo-engineer to check all SEO vitals across your marketing pages" <commentary> Pre-launch SEO review catches missing meta tags, broken sitemaps, and noindex leftovers before they affect indexing. </commentary> </example> <example> Context: User has added new pages to their site. user: "I added a blog section, is the SEO set up correctly?" assistant: "Let me have the seo-engineer audit the blog pages for SEO compliance" <commentary> New page sections often miss meta descriptions, structured data, or OG images that existing pages have. </commentary> </example>
Use this agent to evaluate a rendered web app from a product strategy perspective. Asks whether the product is solving the right problem, whether the UI communicates its value clearly, whether features earn their screen space, and whether the flow converts intent into action. Critically reasoned — not wishlists or feature requests. <example> Context: User wants to know if their product page communicates value. user: "Is this landing page actually convincing anyone to sign up?" assistant: "I'll use the strategist to evaluate the product's value proposition and conversion flow" <commentary> The user is questioning whether the product communicates its value, which is exactly this persona's lens — evaluating positioning, not aesthetics. </commentary> </example> <example> Context: User has a feature-heavy app and suspects bloat. user: "Does this app feel focused or is it trying to do too much?" assistant: "Let me have the strategist evaluate feature prioritization and surface area" <commentary> Feature sprawl vs. focus is a product strategy question. The strategist finds features that exist because someone built them, not because users need them. </commentary> </example>
Use this agent to review test quality — assertion meaningfulness, test isolation, flaky patterns, coverage gaps, and mock hygiene. Complements test-runner agents by evaluating whether tests are actually catching bugs, not just passing. <example> Context: User has written tests and wants quality feedback. user: "Are my tests actually testing anything useful?" assistant: "I'll use the test-quality-engineer to evaluate assertion quality and test structure" <commentary> The user is questioning test value — this agent checks whether assertions verify meaningful behavior rather than just running without errors. </commentary> </example> <example> Context: Tests are passing but bugs keep shipping. user: "Our tests pass but we keep finding bugs in production" assistant: "Let me have the test-quality-engineer analyze your test suite for coverage gaps and weak assertions" <commentary> Passing tests that miss bugs indicate assertion quality or coverage problems — exactly what this agent diagnoses. </commentary> </example>
Use this agent to generate documentation for a specific section of a codebase. Spawned by /arc:document in full-site mode to parallelize documentation generation. <example> Context: Full-site documentation generation needs to document 8 sections in parallel. user: "Generate docs for the entire project" assistant: "I'll spawn docs-writer agents to document each section" <commentary> Each agent gets a focused section assignment with clear file boundaries. </commentary> </example>
Use this agent to run e2e tests (Playwright, Cypress) and fix failures. Runs as a separate agent to avoid filling the main context with verbose test output, traces, and screenshots. <example> Context: Implementation is complete, need to verify e2e tests pass. user: "Run the e2e tests and fix any failures" assistant: "I'll spawn the e2e-test-runner agent to handle this" <commentary> E2e tests produce verbose output. Running in a separate agent keeps main context clean. </commentary> </example>
Use this agent when you have a specification, plan, feature description, or technical document that needs user flow analysis and gap identification. This agent should be used proactively when: <example> Context: The user has just finished drafting a specification for OAuth implementation. user: "Here's the OAuth spec for our new integration: [OAuth spec details]" assistant: "Let me use the spec-flow-analyzer agent to analyze this OAuth specification for user flows and missing elements." <commentary> Since the user has provided a specification document, use the Task tool to launch the spec-flow-analyzer agent to identify all user flows, edge cases, and missing clarifications. </commentary> </example> <example> Context: The user is planning a new social sharing feature. user: "I'm thinking we should add social sharing to posts. Users can share to Twitter, Facebook, and LinkedIn." assistant: "This sounds like a feature specification that would benefit from flow analysis. Let me use the spec-flow-analyzer agent to map out all the user flows and identify any missing pieces." <commentary> The user is describing a new feature. Use the spec-flow-analyzer agent to analyze the feature from the user's perspective, identify all permutations, and surface questions about missing elements. </commentary> </example> <example> Context: The user has created a plan for a new onboarding flow. user: "Can you review this onboarding plan and make sure we haven't missed anything?" assistant: "I'll use the spec-flow-analyzer agent to thoroughly analyze this onboarding plan from the user's perspective." <commentary> The user is explicitly asking for review of a plan. Use the spec-flow-analyzer agent to identify all user flows, edge cases, and gaps in the specification. </commentary> </example> Call this agent when: - A user presents a feature specification, plan, or requirements document - A user asks to review or validate a design or implementation plan - A user describes a new feature or integration that needs flow analysis - After initial planning sessions to validate completeness - Before implementation begins on complex user-facing features - When stakeholders need clarity on user journeys and edge cases.
AI SDK guidance for building AI-powered features. Loads correct patterns, warns about deprecated APIs, and guides through chat UIs, agents, structured output, and streaming. Use when building AI features, debugging AI SDK errors, or before implementing any AI work.
Comprehensive codebase audit with verification and specialized reviewers. Generates actionable reports. Use when asked to "audit the codebase", "review code quality", "check for issues", "security review", or "performance audit". Accepts path scope like "apps/web". Verification-only modes (`quick`, `pre-commit`, `pre-pr`) skip reviewer agents and run the mechanical checks directly. Full and focused modes run those checks first, then the reviewer agents. `--harden` preserves the interactive UI resilience flow.
Create a visual identity system — palette, typography, tone, and generated assets. Produces 5 distinct brand directions for the user to choose from, then converges to a complete brand system with tokens and assets. Strongly opinionated against generic tech aesthetics. Use when asked to "create a brand", "define the visual identity", "design the brand", "set up colors and fonts", or before /arc:design for new projects.
Browse a web app through an expert persona — evaluate the rendered experience, not just the code. Use when asked to "browse the app", "experience the app", "evaluate the app as a user", or when you want expert-level quality assessment of the rendered product. Supports designer, first-time-user, strategist, and demo-presenter personas. Chrome MCP preferred, agent-browser as fallback.
Smart commit and push with auto-splitting across domains. Creates atomic commits. Use when asked to "commit", "push changes", "save my work", or after completing implementation work. Automatically groups changes into logical commits.
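As a rough illustration of the auto-splitting idea, the sketch below buckets changed file paths by their top-level directory to form commit candidates. This is a naive approximation for illustration only, not Arc's actual grouping heuristics; the function name and example paths are hypothetical.

```python
from collections import defaultdict

def group_by_domain(changed_paths):
    """Naively bucket changed paths into commit candidates by top-level directory."""
    groups = defaultdict(list)
    for path in changed_paths:
        # Treat the first path segment as the "domain"; root-level files get their own bucket.
        domain = path.split("/", 1)[0] if "/" in path else "(root)"
        groups[domain].append(path)
    return dict(groups)

changes = [
    "apps/web/page.tsx",
    "apps/web/layout.tsx",
    "packages/ui/button.tsx",
    "README.md",
]
for domain, files in group_by_domain(changes).items():
    print(f"commit candidate for {domain}: {files}")
```

Each bucket would then become one atomic commit (e.g. staged with a pathspec like `git add apps/web`), which is the intuition behind "auto-splitting across domains."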
Dependency audit, alternative discovery, and batch upgrades with test verification. Use when asked to "check dependencies", "audit packages", "update dependencies", "find outdated packages", or "check for CVEs". Generates a prioritized report, then optionally walks through batch upgrades with rollback on failure.
Create distinctive, non-generic UI designs with aesthetic direction and wireframes, or polish existing UI code after implementation. Use when asked to "design the UI", "create a layout", "wireframe this", or when building UI that should be memorable rather than generic. Also handles "clean this up", "componentize this", and other post-implementation polish work. Avoids AI slop patterns.
Internal skill for creating implementation plans. Invoked by /arc:implement, not directly. Creates detailed plans with exact file paths, test code, and TDD cycles.
Generate documentation for your codebase — reference docs for a file, feature guides, or a full documentation site. Use when asked to "document this", "generate docs", "write documentation", "create API reference", or when you need thorough documentation for a module, feature, or entire project. Framework-aware: detects Fumadocs, Nextra, Docusaurus, etc. and generates in the right format.
The main entry point. Understands your codebase and routes to the right workflow. Use when starting a session, saying "let's work on something", or unsure which Arc command to use. Gathers context and asks what you want to do.
Show all Arc commands with context-aware relevance. Reads the codebase to understand what's present (framework, tests, plans, design docs, etc.) and annotates each command with whether it's relevant right now. Use when asked "what can arc do", "help", "list commands", "what commands are available", or "how does arc work".
Install Claude Code hooks and git hooks for automatic formatting, linting, and context monitoring. Use when setting up a project, after "install hooks", "set up hooks", "add auto-formatting", "add git hooks", "set up husky", or when starting a new project that uses Biome.
Turn ideas into validated designs through collaborative dialogue with built-in expert review. Use when asked to "design a feature", "plan an approach", "think through implementation", or when starting new work that needs architectural thinking before coding.
Scope-aware implementation workflow with TDD and continuous quality checks. Use when asked to "implement this", "build this feature", "execute the plan", or after /arc:ideate has created a design doc. For small work it creates a lightweight inline plan; for larger work it creates or loads a full implementation plan and executes task-by-task with build agents.
Production readiness checklist covering domains, SEO, security, and deployment. Use when asked to "ship it", "deploy to production", "go live", "launch", or when preparing a project for production deployment.
Generate and validate project names. Reads codebase context, produces candidates using tech naming strategies, and checks domain + GitHub availability. Use when naming a new project, renaming, or validating an existing name.
Internal skill for progress journal management. Other skills append to docs/arc/progress.md for cross-session context. Not invoked directly by users.
Kill orphaned Claude subagent processes that didn't exit cleanly. Use when asked to "prune agents", "clean up agents", "kill orphaned processes", or when subagents accumulate from Task tool usage.
Discover architectural friction and propose structural refactors with competing interface designs. Focuses on deepening shallow modules, consolidating coupled code, and improving testability. Use when asked to "improve the architecture", "find refactoring opportunities", "deepen modules", "consolidate coupling", "make this more testable", or "find architectural friction".
Audit and fix responsive/mobile issues across every page of a project, using browser screenshots at two breakpoints (375px mobile, 1440px desktop). Design-aware: reads existing design docs to preserve aesthetic intent, not just "make it fit." Use when asked to "make it responsive", "fix mobile", "responsive audit", or after building a desktop-first UI that needs mobile adaptation.
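Before any screenshots, a responsive audit usually starts with a mechanical prerequisite check: does the page declare a viewport meta tag at all? The stdlib sketch below shows that one baseline check; it is illustrative only and not the skill's full visual-verification flow.

```python
from html.parser import HTMLParser

class ViewportCheck(HTMLParser):
    """Detect whether a page declares <meta name="viewport">, a mobile-rendering prerequisite."""

    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if tag == "meta" and dict(attrs).get("name") == "viewport":
            self.has_viewport = True

def has_viewport_meta(html: str) -> bool:
    parser = ViewportCheck()
    parser.feed(html)
    return parser.has_viewport

print(has_viewport_meta('<head><meta name="viewport" content="width=device-width"></head>'))  # True
print(has_viewport_meta('<head><title>Desktop only</title></head>'))  # False
```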
Run expert review on a plan or branch diff with parallel reviewer agents. Presents findings as Socratic questions. Use when asked to "review the plan", "get feedback on the design", "check this approach", "review my changes", "review the diff", or before implementation to validate architectural decisions. Optional argument: reviewer name (e.g., `/arc:review daniel-product-engineer`) or `--diff` to review branch changes.
Apply Arc's coding rules to the current project. Copies rules to .ruler/ directory. Use when asked to "set up coding rules", "apply standards", "configure rules", or when starting a project that should follow Arc's conventions.
Deep SEO audit for web projects. Analyzes codebase for crawlability, indexability, on-page SEO, structured data, social previews, and technical foundations. Optionally runs Lighthouse and PageSpeed against a live URL. Reports findings with severity, offers direct fixes or /arc:detail plans. Use when asked to "audit SEO", "check SEO", "review SEO", or "is my site SEO-ready".
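A couple of the on-page checks can be sketched with the standard library alone; the snippet below flags a missing title or meta description. This is a minimal illustration of the idea, not the skill's actual audit logic, and the function names are hypothetical.

```python
from html.parser import HTMLParser

class OnPageSEO(HTMLParser):
    """Collect the <title> text and meta description from an HTML document."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            attr = dict(attrs)
            if attr.get("name") == "description":
                self.description = attr.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def audit_on_page(html: str) -> list[str]:
    """Return a list of on-page SEO findings for one HTML document."""
    parser = OnPageSEO()
    parser.feed(html)
    findings = []
    if not parser.title.strip():
        findings.append("missing or empty <title>")
    if parser.description is None:
        findings.append("missing meta description")
    return findings

print(audit_on_page("<head><title>Home</title></head>"))  # ['missing meta description']
```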
Opinionated recommendations for what to work on next based on Linear issues, tasks, and codebase. Use when asked "what should I work on", "what's next", "suggest priorities", or when starting a session and unsure where to begin.
Comprehensive testing strategy. Creates test plans covering unit, integration, and E2E. Uses specialist agents for each test type. Supports Vitest and Playwright with auth testing guidance for Clerk and WorkOS.
Clean up completed plans in docs/arc/plans/. Archives or deletes finished plans. Use when asked to "clean up plans", "tidy the docs", "archive old plans", or after completing implementation to remove stale planning documents.
Use when starting any conversation. Establishes Arc's skill routing, instruction priority, and bootstrap rules.
Create or review a high-level vision document capturing project goals and purpose. Use when asked to "define the vision", "what is this project", "set goals", or when starting a new project that needs clarity on purpose and direction.
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use.