By tonone-ai
Orchestrates 23 specialist AI agents as a complete engineering and product team to plan projects, recon codebases, design architectures/UX/databases/infra, implement frontends/backends/mobile/AI features, audit security/performance/tests/observability, generate docs/roadmaps/OKRs, build CI/CD pipelines, and optimize growth analytics. Handles the full software lifecycle, from strategy to production.
npx claudepluginhub tonone-ai/tonone --plugin warden-threat

Engineering lead — orchestrates the team, scopes work, controls depth and budget
Knowledge engineer — architecture docs, ADRs, API specs, system diagrams, onboarding
ML/AI engineer — LLM integration, prompt engineering, RAG, evals, and AI feature design for production
Product strategist — diagnosis-first strategy, roadmap sequencing, competitive positioning, and market decisions
UX designer — user flows, information architecture, wireframes, and interaction design
User researcher — interviews, personas, Jobs-to-Be-Done, and customer feedback synthesis
Data engineer — databases, migrations, pipelines, data modeling
Infrastructure engineer — cloud services, networking, IaC, cost optimization
Visual designer — brand identity, color systems, typography, UI design, and design systems
Head of Product — product strategy, requirements, and engineering handoff via the Helm↔Apex interface
Data analytics & BI engineer — dashboards, metrics design, reporting, data storytelling
Product analyst — metrics architecture, funnel analysis, A/B test design, retention, and growth measurement
Platform engineer — developer experience, golden paths, service catalogs, environment management, internal tooling. Builds whatever removes friction for the team that actually exists.
Product marketer — positioning, messaging, value proposition, GTM strategy, and launch copy
Frontend & DX engineer — translates Form's design system into production UI components, pages, and internal tools
QA & testing engineer — test strategy, E2E suites, integration testing, test infrastructure, flaky test triage
DevOps engineer — CI/CD, deployments, GitOps, developer experience
Backend engineer — APIs, system design, performance, distributed systems
Growth engineer — acquisition channels, activation funnels, retention playbooks, and PLG strategy
Mobile engineer — native iOS/Android, cross-platform, app stores, mobile performance
Observability & reliability engineer — SLOs, alerting, instrumentation, incident response. Writes configs and runbooks, not roadmaps.
Embedded & IoT engineer — firmware architecture, microcontrollers, OTA updates, edge computing, device protocols
Security engineer — IAM, secrets, threat modeling, hardening, auth, and supply chain security
CTO-level project status from git and codebase state. Use when asked "where are we", "project status", "what's done", or at the start of a work session.
System takeover — take ownership of an existing codebase or inherited system. Use when "we acquired this", "previous team left", "take over this system", "inherited this codebase".
Engineering lead — hand Apex any task and it routes internally. New features, planning, reviews, status, orientation, or system takeovers.
Write an Architecture Decision Record — document what was decided, why, what alternatives were considered, and what trade-offs were accepted. Use when asked to "write an ADR", "document this decision", or "why did we choose X".
Maintain per-repo and cross-repo changelogs — append structured entries after agent work. Use when asked to "log this change", "update changelog", "what changed", "change history".
Map the system architecture — read the codebase, identify services and connections, output a C4-level architecture map as Mermaid diagrams with component descriptions. Use when asked to "map the architecture", "system diagram", "how does this work", or "architecture overview".
Generate onboarding documentation — what this project does, how to set up locally, where things live, key decisions, how to deploy. Written for day-one engineers who know nothing. Use when asked for "onboarding docs", "new engineer guide", "how to get started", or "developer setup".
Generate a polished HTML presentation page and Obsidian Canvas for big releases — new products, takeovers, major migrations. Non-technical audience. Use when asked to "present this", "release announcement", "show what we built", or "stakeholder update".
Documentation reconnaissance for takeover — find all docs, assess accuracy, freshness, coverage, and discoverability, and identify critical knowledge gaps. Use when asked "what docs exist", "documentation assessment", or "knowledge gaps".
Render agent findings as a styled HTML report in the browser. Use when asked for "full report", "detailed report", "show in browser", or when CLI output exceeds the 40-line budget.
Knowledge engineer — architecture docs, ADRs, diagrams, changelogs, onboarding, and reports.
Evaluate model performance — check for accuracy drops, data drift, and error patterns. Use when asked about "model accuracy dropped", "evaluate the model", "check for drift", or "model performance".
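One common check behind "check for drift" is the Population Stability Index over binned feature or score distributions. A minimal sketch (the bin counts are invented; the thresholds in the comment are the usual rule of thumb, not a claim about this plugin's internals):

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Small floor avoids log(0) / division by zero for empty bins.
        e_pct = max(e / e_total, 1e-6)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [100, 300, 400, 200]
assert psi(baseline, baseline) < 1e-9      # identical distributions: no drift
assert psi(baseline, [300, 300, 250, 150]) > 0.1   # shifted distribution flags
```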
Design and implement an AI feature integration — model selection, architecture pattern, system prompt, data flow, error handling, cost estimate. Use when asked to "add AI to this", "LLM integration", "add Claude/GPT", or "AI-powered feature".
Build an ML pipeline — from data to trained model to serving endpoint. Use when asked to "build ML model", "train a model", "prediction pipeline", "classification", or "regression".
Build a production-ready prompt package — system prompt, few-shot examples, output format, edge case handling, eval criteria. Use when asked to "prompt engineering", "build a prompt", "write a system prompt", or "improve this prompt".
ML reconnaissance — inventory all models, pipelines, data sources, and monitoring. Use when asked "what ML do we have", "model inventory", or "ML assessment".
ML/AI engineer — LLM integrations, prompt engineering, model pipelines, evals, RAG.
Competitive analysis ending in a clear positioning call — where to play, how to win. Use when asked to "analyze competitors", "competitive landscape", "how do we compare to X", "competitive positioning", "where should we play", "find our white space", or "who else does this".
Strategic narrative — write a standalone strategy memo that frames product direction, bets, and rationale for a planning horizon. Use when asked to "write a strategy doc", "product vision", "strategic narrative", "company strategy memo", "planning memo", or "explain our product direction".
OKR design — create objectives and key results with a North Star metric, input metrics tree, and cadence. Use when asked to "set OKRs", "define our objectives", "what should we measure this quarter", "design our OKR framework", "build a metrics tree", or "what's our North Star".
Strategic context reconnaissance — read existing roadmaps, OKRs, competitive docs, and briefs to establish context before planning. Use when asked to "understand our strategy", "what's the current roadmap", "what OKRs do we have", "strategic context", or before starting any prioritization or roadmap work.
Build a product roadmap with sequenced bets and explicit tradeoffs. Use when asked to "build a roadmap", "prioritize the backlog strategically", "what do we build next quarter", "sequence our bets", "what should we focus on", or "product strategy for the next N months".
Product strategist — roadmaps, competitive analysis, OKRs, strategic narratives.
Use when asked to design a user flow, map how a user moves through a feature, create a wireframe or flow diagram, or document interaction design for a product brief. Examples: "design the flow for X", "map out the user journey", "create a wireframe for this feature", "how should the UX work for this".
Information architecture — design navigation structure, content hierarchy, sitemap, and taxonomy for a product or feature set. Use when asked to "organize the navigation", "information architecture", "how should content be structured", "sitemap", "nav redesign", "where should X live", or "content hierarchy".
Use when asked to structure a landing page, design page layout for conversion, or plan landing page information architecture. Examples: "landing page structure for SaaS", "conversion-optimized layout"
Use when asked about UX patterns, interaction best practices, form design, navigation patterns, or loading states. Examples: "best practice for form validation", "navigation pattern for dashboard", "loading state UX"
UI and UX reconnaissance — scan existing frontend routes, components, navigation, and flows to understand the current UX state before designing. Use when asked to "understand the current UI", "what UX patterns exist", "map the navigation", "what screens exist", or before starting any flow or wireframe work.
Usability review — evaluate an existing flow or UI against usability heuristics, flag friction points, and recommend fixes. Use when asked to "review the UX", "usability audit", "what's wrong with this flow", "UX feedback", "critique this design", or "why are users dropping off here".
Text and Mermaid wireframes — produce screen-level layouts with content hierarchy, component placement, and interaction annotations. Use when asked to "wireframe this", "sketch the UI", "layout for this screen", "lo-fi mockup", "screen design", or "what should this page look like".
UX designer — user flows, information architecture, wireframes, and interaction design.
Feedback synthesis — cluster support tickets, NPS verbatims, app store reviews, and churn surveys by theme, separate signal from noise, and produce an actionable insight report. Use when asked to "synthesize this feedback", "analyze support tickets", "what are users complaining about", "NPS analysis", "churn feedback synthesis", or "what's the feedback telling us".
Run a user interview — produce an interview guide and synthesize the output into an actionable insight report. Use when asked to "run a user interview", "synthesize these interview notes", "what do users actually want", "build a persona from this feedback", "find the JTBD in these transcripts", or "analyze this interview data".
Jobs-to-Be-Done analysis — given a product, user descriptions, transcripts, or tickets, produce a JTBD job map with switching forces analysis and opportunity ranking. Use when asked to "find the JTBD", "what jobs are users hiring us for", "job mapping", "what are users really trying to do", "JTBD framework", or "why are users switching".
User research reconnaissance — survey existing personas, research docs, interview notes, and feedback artifacts to establish what is already known about users. Use when asked to "what research exists", "review existing personas", "what do we know about our users", or before starting new research or synthesis work.
User segmentation and persona creation from mixed data sources — analytics, CRM, support tickets, reviews, or any combination. Use when asked to "build personas", "who are our users", "segment our users", "create user profiles", "define user archetypes", or "who is the target user".
User researcher — interviews, personas, Jobs-to-Be-Done, and customer feedback synthesis.
Data quality and pipeline health check — freshness, schema drift, null rates, orphaned records, pipeline status. Use when asked about "data quality check", "pipeline health", "is our data fresh", or "schema drift".
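A null-rate and freshness check of the kind described can be sketched over a batch of records (field names and the staleness window are illustrative, not this skill's actual implementation):

```python
from datetime import datetime, timedelta, timezone

def quality_report(rows, required_fields, ts_field, max_age):
    """Per-field null rate plus a freshness flag for a batch of records."""
    now = datetime.now(timezone.utc)
    n = len(rows)
    null_rates = {
        f: sum(1 for r in rows if r.get(f) is None) / n
        for f in required_fields
    }
    newest = max(r[ts_field] for r in rows)      # most recent load timestamp
    return {"null_rates": null_rates, "stale": now - newest > max_age}

rows = [
    {"id": 1, "email": "a@x.io", "loaded_at": datetime.now(timezone.utc)},
    {"id": 2, "email": None,     "loaded_at": datetime.now(timezone.utc)},
]
report = quality_report(rows, ["id", "email"], "loaded_at", timedelta(hours=24))
assert report["null_rates"]["email"] == 0.5
assert report["stale"] is False
```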
Build zero-downtime database migrations — forward SQL, rollback SQL, deployment sequence. Use when asked to "write migration", "schema change", "add column", "rename table", "drop column", or "migrate safely".
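The forward/rollback sequencing this skill refers to is typically the expand/contract pattern. A sketch with SQLite standing in for the production database (table and column names are illustrative; the rollback statement needs SQLite ≥ 3.35 and is not executed here):

```python
import sqlite3

# Expand/contract: each step is individually safe to deploy.
FORWARD_SQL = [
    # 1. Expand: add the column as nullable -- no table rewrite, no lock storm.
    "ALTER TABLE users ADD COLUMN display_name TEXT",
    # 2. Backfill (in production: batched UPDATEs, not one statement).
    "UPDATE users SET display_name = name WHERE display_name IS NULL",
]
# 3. Rollback of the expand (not executed in this sketch).
ROLLBACK_SQL = ["ALTER TABLE users DROP COLUMN display_name"]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES ('ada')")
for stmt in FORWARD_SQL:
    con.execute(stmt)
assert con.execute("SELECT display_name FROM users").fetchone() == ("ada",)
```

Old and new application code both keep working between steps, which is what makes the migration zero-downtime.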
Build a data pipeline — ETL/ELT with extraction, transformation, loading, error handling, and scheduling. Use when asked to "build ETL", "data pipeline", "move data from X to Y", or "sync data".
Optimize slow database queries — analyze execution plans, add indexes, rewrite queries. Use when asked about "slow query", "optimize SQL", "query performance", or "explain this query".
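As a rough illustration of execution-plan analysis, SQLite's EXPLAIN QUERY PLAN shows the scan-to-index transition directly (the schema and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.0) for i in range(1000)])

def plan(sql):
    # Each EXPLAIN QUERY PLAN row ends with a human-readable detail string.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)                              # full table scan
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)                               # index lookup
assert "SCAN" in before
assert "USING INDEX idx_orders_customer" in after
```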
Database reconnaissance — full inventory of schema, migrations, data volume, backups, connection pooling, and query patterns. Use when asked to "assess this database", "understand the schema", or "database health check".
Design and build database schema — tables, columns, types, indexes, constraints, relationships. Given a domain description, output the schema and write the files. Use when asked to "design schema", "database design", "create tables", or "data model".
Data engineer — databases, migrations, pipelines, schema design, and query optimization.
Audit existing infrastructure for security issues, waste, and misconfigurations. Use when asked to "audit my infra", "check cloud setup", "infra review", "are we wasting money", "security check on infra", or "review my terraform".
Audit cloud infrastructure costs and produce a concrete optimization plan with specific changes and estimated savings. Use when asked to "how much is this costing", "reduce cloud spend", "cost optimization", "are we overpaying", "cloud bill", or "budget for this infra".
Diagnose runtime infrastructure issues — cold starts, timeouts, scaling problems, network failures. Use when asked about "infra is slow", "cold starts", "network issues", "why is this timing out", "scaling problem", "latency spikes", or "service is down".
Build production-grade infrastructure as code for a service or project. Use when asked to "set up infra", "provision infrastructure", "create cloud resources", "IaC for this project", "terraform for this", or "deploy this service".
Design and build networking infrastructure — VPCs, subnets, DNS, load balancers, firewall rules. Use when asked to "set up networking", "VPC design", "configure DNS", "load balancer setup", "network architecture", or "firewall rules".
Infrastructure reconnaissance — inventory all cloud resources, map connections, flag risks. Use when asked to "inventory our infra", "what infrastructure do we have", "map our cloud resources", "infra discovery", or "what's running in our cloud".
Infrastructure engineer — cloud services, IaC, networking, cost optimization.
Use when asked to audit UI for visual quality, check design consistency, review brand alignment, evaluate design system compliance, or find visual issues before a launch. Examples: "audit our UI", "check visual consistency", "review the design for issues", "is our UI on-brand", "find visual bugs", "design QA before launch".
Use when asked to create a brand identity, define visual design direction, generate a color palette or type system, build a style guide, or establish the look and feel for a product. Examples: "create a brand for X", "define the visual identity", "what colors should we use", "build a style guide", "design system foundations".
Use when asked to design a UI component, specify a button, input, card, modal, badge, or any interactive element. Examples: "design a button component", "spec out the input field", "define the card component states", "create a component spec for Prism", "what should the dropdown look like".
Use when asked to design a pitch deck, presentation, or slide set. Examples: "design a pitch deck", "create a sales deck", "make a conference presentation", "build an investor deck", "help me present this to the board", "create slides for X".
Use when asked to design an email template, newsletter, drip campaign email, transactional email, or any HTML email asset. Examples: "design a welcome email", "create a newsletter template", "make an onboarding email sequence", "design a password reset email", "build an email campaign".
Theory-backed design audit — names the principle violated, cites the source, shows the fix. Use when asked to "evaluate design quality", "check if this follows design principles", "theory check", "design exam", "audit against best practices", or "what's wrong with this design".
Use when asked to create a logo, design a brand mark, generate a logo concept, or produce any logo asset. Examples: "create a logo for X", "design a brand mark", "make me a logo", "generate logo concepts", "logo for our product".
Use when asked to design iOS or Android mobile app screens, create mobile UI, spec mobile flows, or produce screen designs for a native app. Examples: "design the onboarding screens", "spec the checkout flow for iOS", "design a home screen for Android", "create mobile UI for this feature".
Use when asked to generate a color palette, create industry-matched colors, or pick colors for a product type. Examples: "color palette for fintech", "healthcare app colors", "SaaS brand colors"
Use when asked to design social media graphics, ad creatives, or marketing assets. Examples: "design a LinkedIn post for our launch", "create ad creatives for our campaign", "make an Instagram story", "design a Twitter card", "create a banner ad", "social assets for the product announcement".
Use when asked to select a UI style, choose a design direction, pick a visual approach for a product, or match a style to an industry. Examples: "what style fits a fintech app", "choose between neumorphism and glassmorphism", "design direction for healthcare SaaS"
Use when asked to define a design token system, create tokens, document tokens, set up CSS custom properties, build a Tailwind token config, establish a spacing scale, define color semantics, or bridge design decisions to code. Examples: "set up design tokens", "define our token system", "create CSS variables for the design system", "document our color tokens", "establish a spacing scale".
Use when asked to design a landing page, marketing website, or any web presence intended to convert or inform. Examples: "design a landing page for X", "create a marketing site", "we need a homepage", "design our website", "build a page for our launch".
Visual designer — brand identity, color systems, typography, design tokens, and UI design.
Scope arbitration — resolve disagreements between product and engineering on what is in or out of scope, with a decision log and escalation path. Use when asked to "resolve this scope disagreement", "arbitrate between product and eng", "scope is creeping", "we can't agree on what's in scope", or "help us decide what to cut".
Use when asked to write a product brief, turn a feature idea into a spec, define requirements for something to build, or clarify what a product should do and why. Examples: "write a brief for X", "turn this idea into a spec", "what should we build here", "help me define requirements".
Use when a product brief is finalized and ready to hand off to the engineering team, or when asked to send a brief to Apex, kick off engineering work, or start development on a product spec. Examples: "hand this off to engineering", "send brief to Apex", "start building this", "kick off dev on this spec".
Use when asked to build a product roadmap, prioritize a backlog, decide what to build next, or sequence a list of feature ideas. Examples: "what should we build next", "prioritize this backlog", "make a roadmap", "RICE score these features".
Product landscape reconnaissance — survey existing briefs, research, strategy, and team output before writing new briefs or dispatching specialists. Use when asked to "understand the product state", "what briefs exist", "what has the team produced", "orient me on this product", or before starting a new product initiative.
Head of Product — orchestrate the product team, write briefs, plan initiatives, hand off to Apex.
Review existing analytics — find all dashboards and reports, check who uses them, whether metrics are defined, and whether they drive decisions. Recommend what to keep, kill, or add. Use when asked "are our dashboards useful", "analytics review", or "metrics audit".
Use when asked to select chart types for analytics dashboards, choose BI visualizations, or design data displays. Examples: "best chart for sales data", "dashboard visualization for metrics", "analytics chart selection"
Design and spec an analytical dashboard — define the question each chart answers, write the SQL queries, spec the layout and refresh cadence. Produces a complete dashboard spec ready to implement. Use when asked to "build a dashboard", "analytics dashboard", "BI dashboard", "weekly product health", or "visualize this data".
Produce a complete metrics definition doc — metric name, formula, data source, segmentation, SQL or event tracking spec, and what good/bad looks like. Given a product area, outputs the full metrics spec. Use when asked to "define KPIs", "metrics framework", "what should we measure", "north star metric", or "instrument this feature".
Analytics reconnaissance for takeover — find all analytics tools, inventory what's tracked and dashboarded, assess data freshness and metric definitions, and present a coverage map. Use when asked "what analytics exist", "BI assessment", or "what do we track".
Build a reporting pipeline — scheduled reports with SQL queries, delivery via Slack or email, threshold alerts, and historical comparison. Use when asked for "automated reports", "scheduled report", "email digest", or "Slack alerts for metrics".
Analytics and BI engineer — dashboards, metrics design, reporting pipelines, and data storytelling.
A/B test design — produce an experiment spec with hypothesis, primary metric, MDE, sample size, run time, and decision rule. Also determines when NOT to A/B test and what to do instead. Use when asked to "design an A/B test", "should we test this", "experiment design", "how do we know if this works", "what's the sample size", or "set up an experiment".
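The sample-size arithmetic behind such a spec can be sketched with the standard two-proportion z-test approximation (fixed here at two-sided α = 0.05 and 80% power; a real spec would make those explicit inputs):

```python
import math

def sample_size_per_arm(baseline, mde):
    """Approximate per-arm sample size for a two-proportion z-test.
    baseline: control conversion rate; mde: absolute minimum detectable effect.
    z values fixed: 1.96 for two-sided alpha=0.05, 0.8416 for 80% power."""
    z_alpha, z_beta = 1.96, 0.8416
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# Detecting a 10% -> 12% lift needs roughly 3.8k users per arm.
assert 3800 < sample_size_per_arm(0.10, 0.02) < 3900
```

The run-time estimate then follows from dividing total required sample by daily eligible traffic.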
Use when asked to analyze a funnel, find where users drop off, diagnose low conversion or activation rates, design a metrics framework, set up OKRs, or measure whether a feature is working. Examples: "analyze our funnel", "why is activation low", "where are users dropping off", "design OKRs for this quarter", "is this feature working", "set up metrics for this launch".
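The core of "where are users dropping off" is counting distinct users who reach each step in order. A minimal sketch, assuming events arrive as (user_id, step_name) pairs (the step names are invented):

```python
def funnel_counts(events, steps):
    """Distinct users reaching each funnel step, counting a step only
    for users who also reached every earlier step."""
    users_at = [{u for u, s in events if s == steps[0]}]
    for step in steps[1:]:
        reached = {u for u, s in events if s == step} & users_at[-1]
        users_at.append(reached)
    return [len(u) for u in users_at]

events = [(1, "signup"), (1, "activate"), (1, "pay"),
          (2, "signup"), (2, "activate"),
          (3, "signup")]
counts = funnel_counts(events, ["signup", "activate", "pay"])
assert counts == [3, 2, 1]   # biggest absolute drop: activate -> pay
```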
Instrumentation plan — design event taxonomy, property schema, and tracking plan for analytics tools. Use when asked to "what should we track", "instrumentation plan", "set up analytics events", "analytics event schema", "tracking plan", or "instrument this feature".
Metrics architecture — produce a complete metrics plan given a product description. North Star, input metrics tree, instrumentation spec, action triggers, and counter-metrics. Use when asked to "design a metrics framework", "what should we measure", "build a metrics system", "define our KPIs", "what are our success metrics", "metrics strategy", or "what do we track".
Analytics reconnaissance — scan existing event tracking, metric definitions, dashboards, and analytics configuration to understand what is currently being measured. Use when asked to "what are we tracking", "audit our analytics", "what metrics exist", "analytics inventory", or before designing new metrics or instrumentation.
Product analyst — metrics architecture, funnel analysis, A/B test design, retention, and growth measurement.
Audit developer experience — measure onboarding time, build speed, deployment friction, and developer satisfaction. Use when asked to "DX audit", "developer experience review", "why is development slow", "onboarding assessment", or "DORA metrics".
Build a service catalog — schema, starter entries, and governance model. Produces what information to capture per service, how it's maintained, and where it lives. Use when asked to "service catalog", "what services do we have", "catalog our services", "service inventory", or "who owns what".
Set up local development environments — devcontainers, Docker Compose, one-command setup, dev/prod parity. Use when asked to "set up dev environment", "devcontainer", "docker compose for dev", "local development setup", or "one command to run".
Define a golden path — the opinionated, supported way to do a common developer task (create a new service, set up an environment, deploy a feature). Produces concrete steps, templates, and tooling. Use when asked to "golden path", "create project template", "scaffold a new service", "how should we create services", or "standardize our setup".
Platform reconnaissance — inventory all developer tooling, environments, build systems, and developer workflows for project takeover. Use when asked to "understand the dev setup", "developer tooling assessment", "platform assessment", or "how do developers work here".
Platform engineer — developer experience, golden paths, service catalogs, and local dev environments.
Landing page and marketing copy — write hero section, problem/solution blocks, proof points, and CTAs. Use when asked to "write landing page copy", "write the homepage", "marketing copy for this feature", "product page copy", "write the hero section", or "write copy for [surface]".
Use when asked to structure a landing page for positioning, plan a conversion-optimized page layout, or design a launch page. Examples: "landing page for product launch", "conversion-optimized layout for SaaS"
Produce an actual launch plan with announcement copy, channel sequence, and day-1 checklist. Use when asked to "plan a launch", "GTM strategy", "how do we announce this", "launch plan for [feature]", "go-to-market", "write our Product Hunt post", or "how do we get people to notice this".
Messaging framework — produce a full headline, subheadline, proof points, and CTA hierarchy for use across all surfaces. Use when asked to "write our messaging", "messaging framework", "what should our headline say", "copy hierarchy", "tagline and messaging", or "how do we talk about the product".
Produce a complete positioning document using the Dunford framework — competitive alternatives, unique attributes, value, best-fit customer, market category, positioning statement, and tagline. Use when asked to "write our positioning", "define our value prop", "positioning statement", "what market are we in", "how do we position against X", "what's our tagline", or "write our messaging foundation".
Marketing and messaging reconnaissance — read existing landing pages, copy, positioning docs, and marketing materials to understand the current messaging state. Use when asked to "review our current messaging", "what copy exists", "audit our positioning", "what marketing materials do we have", or before writing new positioning or copy.
Product marketer — positioning, messaging, value prop, GTM strategy, and launch copy.
Frontend audit — bundle size, dependencies, accessibility, performance, component quality. Use when asked for "frontend review", "performance audit", "accessibility check", or "bundle size".
Use when asked to implement a chart, select a visualization type, or build a data display component. Examples: "implement chart for time series", "best visualization for comparison data", "chart component for analytics"
Implement a reusable, accessible, typed component from a design spec. Use when asked to "create a component", "build a widget", "implement this design", or "reusable UI element".
Build an internal dashboard with data tables, filters, detail views, and CRUD. Use when asked to build an "admin panel", "internal dashboard", "back office", or "data dashboard UI".
Frontend reconnaissance — map the component tree, routing, state management, build config, and assess quality. Use when asked to "understand this frontend", "frontend assessment", or "what's the UI built with".
Use when asked for framework-specific best practices, implementation guidelines for React/Vue/Svelte/Next.js, or stack-specific patterns. Examples: "React best practices", "Vue component patterns", "Next.js performance"
Implement a complete UI screen or feature from a Form visual spec. Use when asked to "build a page", "implement this screen", "build the frontend for this feature", or "create this UI".
Frontend engineer — UI components, dashboards, design system implementation, and frontend audits.
Build API test suites — endpoint testing, contract testing, load testing for REST/GraphQL/gRPC APIs. Use when asked to "test this API", "API tests", "endpoint testing", "contract tests", or "load test".
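One flavor of contract testing — checking a response body against a declared field/type schema — can be sketched as follows (the schema and payloads are hypothetical, not an API this skill ships with):

```python
def check_contract(payload, schema):
    """Shallow contract check: every field in `schema` must exist in `payload`
    with the declared type. Returns a list of violations (empty = pass)."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors

user_schema = {"id": int, "email": str, "active": bool}
assert check_contract({"id": 7, "email": "a@x.io", "active": True},
                      user_schema) == []
assert check_contract({"id": "7", "email": "a@x.io"}, user_schema) == [
    "id: expected int, got str", "missing field: active"]
```

Real suites layer status codes, headers, and load profiles on top of this shape check.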
Audit test suite health — find flaky tests, slow tests, coverage gaps, and testing anti-patterns. Use when asked to "audit tests", "fix flaky tests", "why are tests slow", "test health", or "improve test suite".
Design QA audit — red flags, severity classification, visual quality scorecard. Use when asked to "QA the design", "check visual quality", "design review before launch", "visual bugs", "design audit", or "does this look right".
Build E2E test specs for critical user journeys — Playwright or Cypress, page objects, setup/teardown, CI config. Use when asked to "write E2E tests", "end-to-end testing", "browser tests", "UI tests", or "Playwright tests".
Testing reconnaissance — inventory all tests, frameworks, coverage, CI integration, and assess testing maturity for project takeover. Use when asked to "understand the tests", "testing assessment", "what's tested", or "test inventory".
Produce a test strategy for a project or feature — risk map, test type decisions, coverage targets, CI config. Use when asked to "create test strategy", "what should we test", "testing plan", or "improve test coverage".
QA and testing engineer — test strategy, E2E suites, API tests, flaky test triage, coverage.
Audit an existing CI/CD pipeline for slowness, security issues, and reliability gaps. Use when asked to "audit pipeline", "why is CI slow", "pipeline review", or "deployment review".
Set up a complete deployment configuration — Dockerfile, deployment manifest, environment config, and rollback procedure. Use when asked about "deployment setup", "how do I deploy this", "deployment strategy", or "rollback plan".
Build production-ready Dockerfiles with multi-stage builds, security hardening, and docker-compose for local dev. Use when asked to "create Dockerfile", "optimize container", or "dockerize this".
Build a full CI/CD pipeline from scratch. Use when asked to "set up CI/CD", "create pipeline", or "automate deploys".
Map the full CI/CD pipeline — triggers, build, test, deploy flow — with risk assessment. Use when asked "how does this deploy", "map the pipeline", or "understand CI/CD".
End-to-end ship workflow — merge base, run tests, review diff, bump version, commit, push, create PR. Use when asked to "ship", "push to main", "create a PR", "get this merged", or "deploy this branch".
DevOps engineer — CI/CD pipelines, deployments, GitOps, Docker, and developer experience.
Design and spec an API — endpoints, request/response shapes, error codes, auth pattern, pagination. Applies Stripe's consistency principles. Use when asked to "design an API", "build API endpoints", "create REST API", or "API for this feature".
Produce a system design doc — components, data flow, decisions made, tradeoffs, failure modes. Not a list of options. An actual design with calls made. Use when asked for "system design for", "architect this", "how should we build", or "design the backend".
Find and fix performance bottlenecks — N+1 queries, missing indexes, sync bottlenecks, caching gaps. Use when asked "why is this slow", "performance issue", "optimize this endpoint", or "N+1 queries".
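The N+1 pattern and its JOIN fix can be demonstrated by counting statements against an in-memory SQLite database (the schema is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

queries = 0
def run(sql, args=()):
    global queries
    queries += 1
    return con.execute(sql, args).fetchall()

# N+1: one query for the list, then one more per row.
titles = {}
for author_id, name in run("SELECT id, name FROM authors"):
    titles[name] = [t for (t,) in run(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
n_plus_one = queries                     # 3 queries for 2 authors

# Fix: a single JOIN fetches the same data in one round trip.
queries = 0
rows = run("""SELECT a.name, p.title FROM authors a
              JOIN posts p ON p.author_id = a.id""")
assert n_plus_one == 3 and queries == 1
```

The same shape hides behind most ORM lazy-loading; the fix is eager loading or an explicit join.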
Backend reconnaissance — map all routes, middleware, models, dependencies, auth, and assess code quality for project takeover. Use when asked to "understand this backend", "map the API", or "assess code quality".
API and backend code review — REST conventions, auth, validation, error handling, pagination, rate limiting, test coverage. Use when asked to "review this API", "code review", "review backend", or "pre-launch backend check".
Build a new production-ready service from scratch — config management, health checks, graceful shutdown, structured logging. Use when asked to "new service", "scaffold a backend", "bootstrap service", or "create microservice".
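Two of the named concerns — env-driven config and SIGTERM-driven graceful shutdown — can be sketched for a single-process service (the class and field names are invented for illustration):

```python
import os
import signal
import time

class Service:
    """Minimal lifecycle skeleton: config from the environment, a health
    payload, and a shutdown flag flipped by SIGTERM so in-flight work drains."""

    def __init__(self):
        self.port = int(os.environ.get("PORT", "8080"))
        self.started = time.time()
        self.shutting_down = False
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Stop accepting new work; the main loop drains and then exits.
        self.shutting_down = True

    def health(self):
        return {"status": "draining" if self.shutting_down else "ok",
                "uptime_s": round(time.time() - self.started, 1)}

svc = Service()
assert svc.health()["status"] == "ok"
svc._on_sigterm(signal.SIGTERM, None)    # simulate the orchestrator's SIGTERM
assert svc.health()["status"] == "draining"
```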
Backend engineer — APIs, system design, performance, distributed systems, and service scaffolding.
Use when asked to improve activation, map the growth funnel, identify growth levers, design a referral program, build a retention playbook, develop a PLG strategy, or find where to invest in growth. Examples: "how do we grow faster", "improve our activation rate", "what are our best growth levers".
Growth experiment design — structure a growth hypothesis, define metric, baseline, expected lift, and kill condition for a single experiment. Use when asked to "design a growth experiment", "test this growth idea", "experiment framework", "how do we test if this works", or "growth hypothesis".
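Sizing the experiment is the step most often skipped; a rough per-arm sample-size calculation using the normal approximation for two proportions (defaults assume ~95% confidence and 80% power; this is a sketch, not the skill's exact method):

```python
import math

def sample_size_per_arm(baseline: float, lift: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-arm sample size to detect an absolute lift in a
    conversion rate (pooled-variance normal approximation)."""
    p_bar = baseline + lift / 2                 # average rate across the two arms
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / lift ** 2
    return math.ceil(n)
```

Detecting a 2-point lift on a 10% baseline needs roughly 3,800 users per arm, which is why small sites should test bigger swings.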
Use when asked to design growth-optimized landing pages, activation funnel layouts, or experiment-friendly page structures. Examples: "growth-optimized landing", "activation funnel layout", "A/B testable page"
PLG motion design — free tier definition, activation sequence, expansion trigger points, viral mechanic assessment. Given a product, output the PLG architecture and make the calls. Use when asked to "PLG strategy", "freemium model", "product-led growth plan", "self-serve motion", "how do we add a free tier", "upgrade triggers", or "viral loop design".
Growth state reconnaissance — scan existing onboarding flows, acquisition channels, conversion funnels, and growth experiment logs to understand current growth state. Use when asked to "what's our growth state", "audit the funnel", "what growth experiments have we run", "acquisition channel inventory", or before designing new growth experiments.
Retention diagnosis + intervention plan — analyze the retention curve, identify the primary drop-off point, and produce a specific intervention plan with expected impact. Use when asked to "improve retention", "why are users churning", "build a retention playbook", "reduce churn", "win-back campaign", or "users aren't coming back".
Growth engineer — acquisition channels, activation funnels, retention playbooks, and PLG strategy.
First-run onboarding tour — guided walkthrough of tonone's 23 agents, key skills, and worktree sessions. Two paths — expert (~90 sec) and newcomer (~8 min). Use when asked "how do I use tonone", "what can tonone do", "show me around", or "first steps".
Produce a complete mobile app architecture design — platform choice, navigation structure, state management, data layer, key screens. Use when asked to "build a mobile app", "new app", "create iOS/Android app", "app architecture", or "cross-platform app".
Mobile audit — app size, startup time, crash reporting, store compliance, accessibility, offline behavior. Use when asked for "mobile review", "app store readiness", "mobile performance", or "crash analysis".
Produce a mobile feature spec — user story, technical approach, component breakdown, platform-specific considerations, edge cases. Use when asked to "add a screen", "spec this feature", "mobile feature", "new tab", "push notifications", or "deep link".
Mobile reconnaissance — understand the app's tech stack, architecture, dependencies, and health for takeover. Use when asked to "understand this app", "mobile assessment", or "app health".
Set up mobile release pipeline — Fastlane, code signing, CI, beta distribution, versioning. Use when asked about "app store setup", "release pipeline", "fastlane", "beta distribution", or "signing".
Plan and scope a project — discovery, challenge assumptions, present S/M/L options with token and cost estimates. Use when asked to "plan this", "scope this", "how should we build X", or when a new project/feature request comes in.
Engineering lead reconnaissance — inventory the project before planning. Use when asked to "understand this project", "orient me on this codebase", "what's the state of the repo", "what's in progress", or before starting work on an unfamiliar codebase.
Cross-cutting review of recent work — catches gaps between specialists. Use when asked to "review what we built", "check the work", "pre-launch review", or after completing a significant chunk of work.
Use when asked about mobile UI guidelines, touch targets, platform-specific UI rules, or mobile interaction patterns. Examples: "iOS touch targets", "Android UI guidelines", "mobile form design"
Mobile engineer — native iOS/Android, cross-platform, app stores, mobile performance.
Write SLO-based alert rules with burn rate thresholds and paired runbooks. Outputs actual alert configs, not a strategy doc. Use when asked to "set up alerts", "create runbooks", "define SLOs", or "alerting strategy".
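The core arithmetic behind burn-rate alerting, as a sketch (the 14.4x fast-burn threshold is the widely used figure for paging, since burning a 30-day budget at 14.4x exhausts it in about 50 hours; exact thresholds and windows are up to the skill):

```python
def burn_rate(error_rate: float, slo_target: float) -> float:
    # Burn rate = observed error rate / error budget rate.
    # A 99.9% SLO leaves a 0.1% budget, so a 0.1% error rate burns at 1x.
    budget = 1.0 - slo_target
    return error_rate / budget

def should_page(error_rate: float, slo_target: float = 0.999,
                fast_burn_threshold: float = 14.4) -> bool:
    return burn_rate(error_rate, slo_target) >= fast_burn_threshold
```

In practice this ratio is evaluated over multiple windows (e.g. 5m and 1h together) so a short spike does not page on its own.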
Verify observability posture — audit monitoring coverage, find blind spots, prioritize gaps. Use when asked "is monitoring sufficient", "observability review", "are we covered", or "pre-launch monitoring check".
Incident response — diagnose production issues, find the root cause, and propose a fix with a rollback plan. Use for "something is broken", "production issue", "why is this down", "incident", or "debug production".
Instrument a service with OpenTelemetry — RED metrics, structured logs, distributed tracing, and health checks. Outputs actual code and config, not a plan. Use when asked to "add monitoring", "instrument this", "add logging", "set up tracing", or "observability".
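What RED instrumentation captures, shown as a tiny in-process recorder (plain Python for illustration only; real instrumentation would export these through the OpenTelemetry SDK rather than hold them in dicts):

```python
from collections import defaultdict

class RedMetrics:
    """Toy RED (Rate, Errors, Duration) recorder: per-endpoint request
    counts, 5xx counts, and latency samples."""
    def __init__(self):
        self.requests = defaultdict(int)
        self.errors = defaultdict(int)
        self.durations = defaultdict(list)

    def observe(self, endpoint: str, status: int, seconds: float) -> None:
        self.requests[endpoint] += 1
        if status >= 500:
            self.errors[endpoint] += 1
        self.durations[endpoint].append(seconds)

    def error_ratio(self, endpoint: str) -> float:
        total = self.requests[endpoint]
        return self.errors[endpoint] / total if total else 0.0
```

These three signals per endpoint are usually enough to answer "is it slow, is it failing, and how much traffic is it taking".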
Observability reconnaissance — inventory what monitoring exists, map coverage, highlight blind spots. Use when asked "what monitoring exists", "observability assessment", or "what can we see".
Observability and reliability engineer — SLOs, alerting, instrumentation, and incident response.
Build a device driver or protocol handler — I2C sensors, BLE services, MQTT clients, SPI peripherals with interrupt-driven I/O and clean HAL abstraction. Use when asked to "write a driver", "I2C device", "BLE service", "MQTT client", or "sensor integration".
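The HAL-abstraction idea in miniature: the driver talks to a bus interface, never to MCU registers, so it runs unchanged against hardware or a test double. The sensor, its address, and its register map below are entirely made up:

```python
from abc import ABC, abstractmethod

class I2CBus(ABC):
    """HAL boundary: drivers depend on this interface, not on the MCU."""
    @abstractmethod
    def write_read(self, addr: int, out: bytes, nread: int) -> bytes: ...

class FakeI2CBus(I2CBus):
    """Test double returning canned register contents."""
    def __init__(self, registers: dict[int, bytes]):
        self.registers = registers
    def write_read(self, addr: int, out: bytes, nread: int) -> bytes:
        return self.registers[out[0]][:nread]

class TempSensor:
    """Driver for an imaginary temperature sensor: register 0x00 holds
    big-endian hundredths of a degree Celsius."""
    REG_TEMP = 0x00
    def __init__(self, bus: I2CBus, addr: int = 0x48):
        self.bus, self.addr = bus, addr
    def read_celsius(self) -> float:
        raw = self.bus.write_read(self.addr, bytes([self.REG_TEMP]), 2)
        return int.from_bytes(raw, "big") / 100.0
```

On real firmware the same split holds in C: the driver takes a struct of bus function pointers instead of an abstract class.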
Produce a complete firmware architecture spec for a described device — layer diagram, module responsibilities, HAL interface definitions, key state machines, RTOS decision. Use when asked to "design firmware architecture", "plan embedded firmware", "architect an IoT device", "how should I structure this firmware", or given a device description and asked what the firmware should look like.
Produce a complete OTA update system design — partition layout, update flow, rollback conditions, validation checks, fleet management approach, failure modes and recovery. Use when asked about "OTA updates", "firmware updates over the air", "how do I update devices in the field", "OTA strategy", or "remote firmware update design".
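The rollback condition at the heart of an A/B-slot OTA design, sketched as a toy state machine (slot names and method names are illustrative): a staged image boots once on trial, and only a passing health check commits it; otherwise the device falls back to the known-good slot.

```python
class OtaSlots:
    """A/B-slot booking logic: trial boot, then commit or roll back."""
    def __init__(self):
        self.active, self.trial = "A", None

    def stage_update(self, slot: str) -> None:
        self.trial = slot                 # new image written to the inactive slot

    def boot(self, healthy: bool) -> str:
        if self.trial is None:
            return self.active            # normal boot, no update pending
        if healthy:
            self.active, self.trial = self.trial, None   # commit new image
        else:
            self.trial = None                            # roll back to old slot
        return self.active
```

Real bootloaders implement the same logic with a boot-attempt counter in flash, so a device that hangs before running the health check still falls back.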
Power management audit — analyze sleep modes, wake sources, power state machines, radio duty cycles, and battery life estimates. Use when asked to "audit power usage", "optimize battery life", "review power management", "why is my battery draining", "power budget analysis", or "sleep mode review".
Firmware reconnaissance for takeover — inventory the MCU, peripherals, RTOS, protocols, OTA, power management, and assess code quality with risk flags. Use when asked to "understand this firmware", "device inventory", or "embedded assessment".
Embedded and IoT engineer — firmware, microcontrollers, OTA updates, device protocols.
Full security audit — secrets, dependencies, IAM, auth, injection, XSS, HTTPS, rate limiting, public storage. Use when asked for "security audit", "check for vulnerabilities", "security review", or "are we secure".
Produce a hardening spec and implement it — auth patterns, security headers, rate limiting, input validation, secrets management, dependency hygiene. Use when asked to "harden this", "add security to this service", "what security do I need", or "secure this before launch".
Build IAM from scratch — roles, policies, service accounts with least privilege. Use when asked to "set up IAM", "create roles", "service accounts", or "access control".
Security reconnaissance — full inventory of secrets management, IAM, dependencies, auth, encryption, audit logging, and compliance gaps. Use when asked about "security posture", "how secure is this", or "security assessment".
Produce a threat model — assets, ranked threats, mitigations, accepted risks. Use when asked to "threat model this", "what could go wrong security-wise", "map our attack surface", or before designing any security-sensitive feature.
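One common way to rank the threat list is a simple likelihood x impact score; a sketch of that bookkeeping (the scales and scoring scheme are an assumption, not necessarily what this skill uses):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One threat-model row; likelihood and impact on 1-5 scales."""
    name: str
    likelihood: int
    impact: int
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def rank(threats: list[Threat]) -> list[Threat]:
    # Highest-risk first, so mitigation effort follows exposure.
    return sorted(threats, key=lambda t: t.score, reverse=True)
```

Accepted risks stay in the ranked list with an explicit note, which is what makes the model auditable later.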
Security engineer — IAM, secrets, threat modeling, hardening, auth, and supply chain security.
Executes bash commands
Hook triggers when Bash tool is used
Modifies files
Hook triggers on file write and edit operations
Share bugs, ideas, or general feedback.
Uses power tools
Uses Bash, Write, or Edit tools