From ts-dev-kit
Executes tasks from TASK_N.md files or free-form descriptions, auto-generating missing scope, success criteria, and verification plans via /generate-tasks before implementation.
npx claudepluginhub jgamaraalv/ts-dev-kit --plugin ts-dev-kit
Arguments: [task-md-file-path | task-description]
This skill uses the workspace's default tool permissions.
<phase_0_intake> Resolve the input and ensure the task document is complete before executing.
Step 1 — Determine input type:
If $ARGUMENTS looks like a file path (ends in .md, starts with /, ./, or docs/, or contains a directory separator), read the file. Otherwise, treat it as a free-form task description.
Step 2 — Validate the task document: A task document is ready for execution when it contains ALL of:
- ## Scope — Files with at least one file entry
- ## Success Criteria with at least one testable criterion
- ## Verification Plan with baseline and post-change checks
If all required sections are present → proceed to phase 1.
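A task document that passes this validation might look like the following sketch (the task, file paths, and criteria are illustrative, not prescribed by this skill):

```markdown
# TASK_1: Add search endpoint

## Scope — Files
- src/routes/search.ts — new route handler

## Success Criteria
- GET /search?q=... returns 200 with a JSON array of results

## Verification Plan
- Baseline: run the existing test suite; record the pass count
- Post-change: re-run tests; the new route test passes; no regressions
```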
Step 3 — Generate the task document: If the document is missing any required section, OR the input was a free-form description:
Task document incomplete. Generating a structured task document via /generate-tasks before proceeding.
Call Skill(skill: "generate-tasks"), passing the original input as context; it produces a TASK_N.md file. Proceed to phase 1 with the generated document. </phase_0_intake>
<phase_1_context_analysis> Before writing any code, build a mental model of the task scope.
<phase_2_role_assignment> Determine the full execution context: role persona, domain area, domain technologies, project context, required skills, and available MCPs.
<domain_areas> Map the task to one or more domain areas AND sub-areas. Sub-areas determine the specialist agent type and skill set.
Backend
| Sub-area | Agent type | When | Key skills |
|---|---|---|---|
| Database | database-expert | Schema design, migrations, complex queries, indexes | drizzle-pg, postgresql |
| Endpoints | api-builder | Routes, validation, handlers, API contracts | fastify-best-practices |
| Queues | general-purpose | Job processing, workers, schedulers | bullmq, ioredis |
| Security | security-scanner | Auth flows, RBAC, input sanitization | owasp-security-review |
Frontend
| Sub-area | Agent type | When | Key skills |
|---|---|---|---|
| Components | react-specialist | Component architecture, hooks, state management, composition | react-best-practices, composition-patterns |
| Pages/routing | general-purpose | Pages, layouts, data fetching, RSC boundaries | nextjs-best-practices |
| UI/UX | ux-optimizer | User flows, form UX, friction reduction | ui-ux-guidelines |
| Accessibility | accessibility-pro | WCAG compliance, keyboard nav, screen readers | — |
| Performance | performance-engineer | Core Web Vitals, bundle size, re-renders | react-best-practices |
Shared packages — typescript-pro or general-purpose (shared types, schemas, utilities)
Cross-cutting specialists (use alongside any domain)
| Agent type | When |
|---|---|
| test-generator | Writing test suites, improving coverage |
| debugger | Investigating errors, stack traces |
| docker-expert | Dockerfiles, compose, container config |
| code-reviewer | Post-implementation review |
| playwright-expert | E2E browser tests |
A single task may require agents from multiple sub-areas. For example, "add a new resource endpoint with list UI" needs database-expert (schema) + api-builder (route) + react-specialist (component).
</domain_areas>
<domain_technologies> Read the relevant package.json to identify installed packages and their versions. Use the actual versions found — do not assume. </domain_technologies>
<role_persona> Compose a role persona that combines the domain sub-area, project context, technologies, and task nature.
- "You are a database specialist working with [ORM] and [database] on schema design for [brief project description from CLAUDE.md]"
- "You are a React component architect specialized in composition patterns, hooks, and state management with [React version] and [UI library]"
- "You are a [framework] API developer building REST endpoints with validation and type-safe request/response contracts"
- "You are a frontend performance engineer optimizing Core Web Vitals and bundle size for a [framework] application"
- "You are a queue specialist designing job flows with [queue library] and retry strategies"

Discover the actual technologies and versions from package.json. Describe the project based on CLAUDE.md or README. </role_persona>
<required_skills> Call the Skill tool for each relevant skill before writing any code or dispatching agents. Skills inject domain-specific rules and best practices.
Identify the required skills from the domain area, then call each one:
Skill(skill: "fastify-best-practices")
Skill(skill: "drizzle-pg")
Skill(skill: "postgresql")
<skill_map> Match skills to the sub-area identified in domain_areas:
Backend sub-areas: Database → drizzle-pg, postgresql; Endpoints → fastify-best-practices; Queues → bullmq, ioredis; Security → owasp-security-review.
Frontend sub-areas: Components → react-best-practices, composition-patterns; Pages/routing → nextjs-best-practices; UI/UX → ui-ux-guidelines; Performance → react-best-practices.
Cross-cutting → combine skills from each sub-area involved. </skill_map>
In ALL execution modes: include explicit Skill() call instructions in each subagent prompt (see references/agent-dispatch.md). The orchestrator does not need to load skills itself — agents load them before writing code. </required_skills>
<available_mcps> Identify MCPs that can assist execution:
Context7 usage — MANDATORY for config and versioned APIs: Many tools have breaking config changes across minor versions. Before writing or modifying ANY configuration file for versioned tools (OTel collector, Prometheus, Grafana, Loki, Tempo, Docker Compose, Drizzle, Fastify, Next.js, Redis, BullMQ, Nginx, etc.), you MUST query Context7 first to verify the correct syntax for the installed version. Do NOT try variations blindly — guess-looping config fixes wastes context and compounds errors.
- mcp__context7__resolve-library-id — resolve the library name (e.g., "fastify", "drizzle-orm", "next", "react", "bullmq") to its Context7 ID.
- mcp__context7__query-docs — query with the specific API, config key, or pattern you need.

This applies to ALL roles: orchestrator, dispatched agents, and ad-hoc specialists. Check the project's package.json for installed versions first. In MULTI-ROLE mode, include these Context7 instructions in each agent prompt so agents query docs themselves before writing config. </available_mcps> </phase_2_role_assignment>
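The two-step flow, sketched in this document's tool-call notation (the library name, parameter names, and query text are illustrative):

```
mcp__context7__resolve-library-id(libraryName: "drizzle-orm")
  → returns the Context7 ID for the installed package

mcp__context7__query-docs(id: <resolved ID>, query: "drizzle.config.ts options for the installed drizzle-kit version")
  → returns version-accurate syntax to use verbatim, instead of guess-looping config variations
```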
<phase_2b_multi_role_decomposition> When a task spans multiple domains, decompose it into separate roles with isolated execution contexts.
<when_to_decompose> Decompose into multiple roles when ANY of these apply:
Do not decompose if:
<decomposition_rules> <rule_1_define_roles_independently> Each role gets its own persona, skill set, context files, and success criteria.
- Role A: Database specialist (sub-area: Database). Agent: database-expert (or ts-dev-kit:database-expert if plugin-scoped). Task: design schema + migration for the new feature. Skills: drizzle-pg, postgresql.
- Role B: API endpoint developer (sub-area: Endpoints). Agent: api-builder (or ts-dev-kit:api-builder). Task: build REST routes consuming the new schema. Skills: fastify-best-practices.
- Role C: Component architect (sub-area: Components). Agent: react-specialist (or ts-dev-kit:react-specialist). Task: build the result card and list components. Skills: react-best-practices, composition-patterns.
- Role D: Page builder (sub-area: Pages/routing). Agent: general-purpose (ad-hoc). Task: wire components into the search results page with data fetching. Skills: nextjs-best-practices.
- Role E: TypeScript library developer. Agent: typescript-pro (or ts-dev-kit:typescript-pro). Task: add shared schemas and types. Skills: none extra.
</rule_1_define_roles_independently>

<rule_2_dispatch_agents> For each role, spawn a specialized subagent via the Task tool.
Selecting the agent type:
- Project-level agents (defined in .claude/agents/): use the short name — e.g., database-expert.
- Plugin-scoped agents: use the prefixed name — e.g., ts-dev-kit:database-expert.

Before dispatching, check which agents are available in your context. If you see agents with a ts-dev-kit: prefix, use the prefixed name as subagent_type. If agents are available by short name, use the short name. If neither is available, use general-purpose as subagent_type and embed the full role definition directly in the prompt — this creates an ad-hoc specialist without needing a .md file.

Ad-hoc agent creation — when subagent_type: "general-purpose" is used as a specialist surrogate, the prompt must include:
- Skill(skill: "...") calls for domain-specific rules.

<example_adhoc_agent>
Task(
description: "Implement notification worker",
subagent_type: "general-purpose",
model: "sonnet",
prompt: """
You are a queue specialist working on this project.
Your expertise: Redis-backed job queues, workers, flow producers, retry strategies, rate limiting, and graceful shutdown.
[task details...]
Discover from the codebase:
"""
)
</example_adhoc_agent>
Each agent prompt must follow the template in references/agent-dispatch.md. The agent must be able to complete its work independently. </rule_2_dispatch_agents>
<rule_3_execution_order> Decide the execution order based on file conflicts and dependencies:
For agents that may touch the same files, set isolation: "worktree" on the Task tool. Each agent gets an isolated copy of the repository; the worktree is auto-cleaned if the agent makes no changes.

Decision tree for overlapping files: if the agents are otherwise independent, dispatch them in parallel with isolation: "worktree" on each Task() call, then merge results.
</rule_3_execution_order>

<rule_4_model_selection> Choose the model for each dispatched agent based on task complexity:
Set the model parameter on the Task tool call accordingly.
</rule_4_model_selection>
</decomposition_rules>
<dispatch_pattern> The main session acts as the orchestrator:
Constraints:
<execution_mode_decision> At the end of phase 2, make an explicit execution mode decision and state it to the user:
EXECUTION MODE: SINGLE-ROLE — Single domain. I will dispatch 1-2 focused agents for implementation.
OR
EXECUTION MODE: MULTI-ROLE — Multiple domains. I will dispatch specialized agents across domains via the Task tool.
OR
EXECUTION MODE: PLAN — The task is highly complex. I will enter plan mode to design a structured implementation plan before executing.
CRITICAL: In ALL execution modes, the orchestrator (main session) NEVER writes application code directly. All implementation — components, hooks, pages, routes, services, tests — is delegated to agents via the Task tool. The orchestrator's role is: context gathering, agent dispatch, output review, integration glue (under 15 lines), and quality gates.
The difference between SINGLE-ROLE and MULTI-ROLE is decomposition complexity, NOT whether agents are used:
Use PLAN mode when:
In PLAN mode: use EnterPlanMode to design the full plan. Once the user approves it, exit plan mode and execute phases sequentially as orchestrator, with context cleanup between major phases when needed. </execution_mode_decision> </phase_2b_multi_role_decomposition>
<phase_3_task_analysis> Read the task document and load the criteria defined there. These are binding requirements for this execution.
Extract from the task document:
State them to the user:
Task loaded: [task title]
- Dependencies: [list or "none"]
- Files in scope: N files
- Success criteria: [count] criteria
- Baseline checks: [list]
- Post-change checks: [list]
- Performance targets: [list or "none defined"]
For questions about project libraries, use Context7 (mcp__context7__resolve-library-id → mcp__context7__query-docs) to query up-to-date documentation. If anything is ambiguous, ask the user before proceeding.
Check MCP availability — use ToolSearch to detect which browser MCPs are available (playwright, chrome-devtools, or neither), then confirm against the task's "MCP Checks" section.
Plan the implementation order from the task's file scope — build dependencies before dependents: shared types → database schema → API layer → UI components → pages → tests.
Confirm the verification plan with the user:
Verification plan:
- Baseline checks: [from task doc]
- MCPs available: [detected list or "none — shell-only"]
- Post-change checks: [from task doc] </phase_3_task_analysis>
<phase_3b_baseline_capture> MANDATORY. Run the verification plan before writing any code to establish the baseline for comparison. Do NOT skip this phase.
Step 1: Standard quality gates — run and record results (pass/fail, counts, bundle sizes). Discover the exact commands from package.json scripts for each affected package.
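Discovering the gate commands can be a one-liner; a sketch, assuming Node.js is on PATH and the command is run from the affected package's directory:

```shell
# Print the script names defined in package.json, one per line
# (prints nothing if there is no package.json in the current directory)
if [ -f package.json ]; then
  node -p "Object.keys(require('./package.json').scripts).join('\n')"
fi
```

Run the relevant scripts (typecheck, lint, test, build) and record their results as the baseline.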
Step 2: MCP-based checks — follow this decision tree in order:
a. If browser MCPs are available, check whether the dev server is reachable (at localhost or the configured URL).
b. If the dev server is accessible: navigate to each affected page, capture screenshots of key states, and measure performance (LCP, load time). Use Chrome DevTools traces or Playwright screenshots as appropriate.
c. If the dev server is NOT accessible: ask the user whether to start it or skip visual checks. Do NOT silently skip — the user must confirm.
Step 3: Store baseline — all values captured here are compared against post-change results in phase 5b.
When visual/performance checks are skipped, state the reason:
Baseline captured. MCP-based visual checks skipped — [reason: no browser MCPs available | dev server not running (user confirmed skip) | no frontend pages affected].
The orchestrator ALWAYS runs baseline capture before dispatching any agents, regardless of execution mode. </phase_3b_baseline_capture>
<phase_4_execution> CRITICAL: The orchestrator (main session) NEVER writes application code. All implementation is dispatched to agents via the Task tool. The orchestrator may only write trivial glue (under 15 lines total): barrel file exports, small wiring imports, or config one-liners.
Before dispatching, check the execution mode decision from phase 2 to determine decomposition complexity.
<agent_dispatch_protocol> This protocol applies to ALL execution modes (SINGLE-ROLE, MULTI-ROLE, and PLAN).
As orchestrator, your responsibilities are: context gathering, agent dispatch, output review, integration glue, and quality gates. You do NOT write application code (components, hooks, pages, routes, services, tests).
Dispatch steps:
- Set the model parameter according to rule_4_model_selection.
- Pass isolation: "worktree" to each Task() call. This gives each agent an isolated copy of the repository, preventing edit conflicts. Worktrees are auto-cleaned when the agent makes no changes.

For the agent prompt template and dispatch details, see references/agent-dispatch.md.
Self-check: If you find yourself creating application files (routes, components, services, hooks, tests, pages), STOP and delegate to an agent instead. </agent_dispatch_protocol>
<build_order> Instruct agents to work from micro to macro — build dependencies before dependents: shared types → database schema → API layer → UI components → pages → tests.
Decision tree (include in agent prompts when relevant):
<phase_5_quality_gates> A task is not done until all quality gates pass. Run them in order for every affected package. If any gate fails, fix the issue and re-run all gates from the beginning.
Discover the available commands from package.json scripts for each affected package. Common gates:
- Type check (tsc, typecheck)
- Lint (lint, eslint)
- Tests (test, vitest)
- Build (build)

For monorepos, run these for each affected workspace/package. Discover the workspace command pattern from CLAUDE.md or the root package.json (e.g., yarn workspace <name> <script>, pnpm --filter <name> <script>, npm -w <name> <script>, turbo run <script> --filter=<name>).
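The gate loop can be sketched as a small shell function; the pnpm --filter form and the gate script names are assumptions — substitute the workspace command pattern and script names discovered from the actual repo:

```shell
# Run each quality gate for one package, stopping at the first failure.
# Gate names (typecheck/lint/test/build) and the pnpm invocation are illustrative.
run_gates() {
  pkg="$1"
  for gate in typecheck lint test build; do
    pnpm --filter "$pkg" "$gate" || return 1
  done
}
```

Per the rule above, a failure means fixing the issue and re-running all gates from the beginning, not resuming mid-list.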
If a gate fails: dispatch the debugger agent (for investigation) or the appropriate specialist agent (e.g., typescript-pro for type errors, api-builder for route errors) to fix the issues. The orchestrator's role is to read the error, decide which agent can fix it, and dispatch — never to edit application code itself. </phase_5_quality_gates>

<phase_5b_post_change_verification> After all quality gates pass, re-run the verification plan from phase 3b and compare against baseline.
If any regression is found, fix it, re-run phase 5 quality gates, then re-run this phase. Repeat until clean. </phase_5b_post_change_verification>
<phase_6_documentation> After all quality gates pass, review whether the changes require documentation updates:
Only update documentation directly affected by the changes. Do not create new documentation files unless the changes introduce a new package or major feature with no existing docs. </phase_6_documentation>
<orchestrator_anti_patterns>
These are recurring mistakes. Violating any of these rules degrades execution quality and wastes context.
When quality gates (tsc/lint/test/build) fail during execution, dispatch an isolated agent to fix them. Do NOT attempt fixes in the main orchestrator session. Fixing inline exhausts the context window and triggers compaction mid-task, losing critical execution state.
When a tool or config isn't working (e.g., OTel collector pipeline, Prometheus scraping, Docker networking, Drizzle config, Fastify plugin options), use Context7 immediately to query the library docs for the installed version. Do NOT try variations blindly. Query with mcp__context7__resolve-library-id + mcp__context7__query-docs before touching any config file.
If you announce "dispatching 3 agents in parallel", your message MUST contain exactly 3 Task() tool calls. Announcing parallel dispatch and then only sending 1 Task() is a protocol violation. Count your Task() calls before sending. If agents are independent, they go in the SAME message.
The orchestrator reads, analyzes, dispatches, reviews, and runs quality gates. It does NOT write components, hooks, routes, services, migrations, tests, or any application logic. The only code the orchestrator may write is trivial integration glue (under 15 lines): barrel exports, small wiring imports, or config one-liners.
When two or more agents run in parallel and touch any of the same files (shared barrel exports, config files, common modules), they will produce edit conflicts. Always set isolation: "worktree" on each Task() call so each agent works on an isolated copy of the repository. Skip isolation only when agents touch completely separate files.
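A correct parallel dispatch, sketched in this document's Task() notation (descriptions, agent types, and prompts are illustrative) — both calls in the same message, each with worktree isolation:

```
Task(description: "Add search route", subagent_type: "api-builder", model: "sonnet", isolation: "worktree", prompt: "...")
Task(description: "Build result list component", subagent_type: "react-specialist", model: "sonnet", isolation: "worktree", prompt: "...")
```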
Before sending any message during execution, verify:
- The announced number of dispatched agents matches the number of Task() calls actually in the message.
- Parallel Task() calls that could touch the same files pass isolation: "worktree" to each Task() call.

Do not add explanations, caveats, or follow-up suggestions unless the user explicitly asks. The report is the final output. </orchestrator_anti_patterns>