# Test Generator (dotnet-test plugin)
Generates unit tests for codebases in C#, TypeScript, JavaScript, Python, Go, Rust, and Java via a multi-agent pipeline. Ensures tests compile, pass, and match project conventions, improving coverage.
Install with `npx claudepluginhub dotnet/skills --plugin dotnet-test`. This skill uses the workspace's default tool permissions.
An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline.
Use this skill when you need to generate unit tests for an existing codebase. (Related skills: run-tests, writing-mstest-tests.)

This skill coordinates multiple specialized agents in a Research → Plan → Implement pipeline:
```
┌─────────────────────────────────────────────────────────────┐
│                       TEST GENERATOR                        │
│       Coordinates the full pipeline and manages state       │
└─────────────────────┬───────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
  ┌───────────┐ ┌───────────┐ ┌───────────────┐
  │ RESEARCHER│ │  PLANNER  │ │  IMPLEMENTER  │
  │           │ │           │ │               │
  │ Analyzes  │ │ Creates   │ │ Writes tests  │
  │ codebase  │→│ phased    │→│ per phase     │
  │           │ │ plan      │ │               │
  └───────────┘ └───────────┘ └───────┬───────┘
                                      │
                  ┌─────────┬─────────┼─────────┐
                  ▼         ▼         ▼         ▼
              ┌─────────┐ ┌───────┐ ┌───────┐ ┌───────┐
              │ BUILDER │ │TESTER │ │ FIXER │ │LINTER │
              │         │ │       │ │       │ │       │
              │ Compiles│ │ Runs  │ │ Fixes │ │Formats│
              │ code    │ │ tests │ │ errors│ │ code  │
              └─────────┘ └───────┘ └───────┘ └───────┘
```
Make sure you understand what the user is asking and the intended scope. When the user does not express strong requirements for test style, coverage goals, or conventions, source the guidelines from unit-test-generation.prompt.md. This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.
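As a concrete illustration of the parameterization guidance, here is a minimal sketch using Python's stdlib unittest. The `clamp` function is a hypothetical target chosen for this example; it is not part of the skill or the prompt file:

```python
import unittest

# Hypothetical function under test; in practice the target comes
# from the codebase being analyzed, not from this skill.
def clamp(value, low, high):
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_clamp_cases(self):
        # One parameterized test covering normal, boundary, and
        # out-of-range inputs instead of four near-identical tests.
        cases = [
            (5, 0, 10, 5),    # in range
            (-1, 0, 10, 0),   # below the lower bound
            (11, 0, 10, 10),  # above the upper bound
            (0, 0, 10, 0),    # exactly on a boundary
        ]
        for value, low, high, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(clamp(value, low, high), expected)
```

The same strategy maps to `[DataRow]` in MSTest, `test.each` in Jest, or table-driven tests in Go, depending on the target language.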
Start by calling the code-testing-generator agent with your test generation request:
Generate unit tests for [path or description of what to test], following the [unit-test-generation.prompt.md](unit-test-generation.prompt.md) guidelines
The Test Generator will manage the entire pipeline automatically.
The code-testing-researcher agent analyzes your codebase to understand its structure, frameworks, and testing conventions.
Output: .testagent/research.md
The code-testing-planner agent creates a structured, phased implementation plan.
Output: .testagent/plan.md
The code-testing-implementer agent executes each phase sequentially:
- the code-testing-builder sub-agent to verify compilation
- the code-testing-tester sub-agent to verify tests pass
- the code-testing-fixer sub-agent if errors occur
- the code-testing-linter sub-agent for code formatting

Each phase completes before the next begins, ensuring incremental progress.
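The per-phase build/test/fix/lint loop can be sketched roughly as follows. The `dispatch` callable and its `.ok` result attribute are hypothetical stand-ins for the real sub-agent invocations, not part of the skill's API:

```python
# Rough sketch of one implementation phase, under the assumption
# that dispatch(agent_name, phase) runs a sub-agent and returns an
# object with a boolean .ok attribute (hypothetical interface).
def run_phase(phase, dispatch, max_fix_attempts=3):
    for _ in range(max_fix_attempts):
        if dispatch("code-testing-builder", phase).ok:      # compiles?
            if dispatch("code-testing-tester", phase).ok:   # tests pass?
                dispatch("code-testing-linter", phase)      # format code
                return True
        dispatch("code-testing-fixer", phase)               # repair, then retry
    return False
```

The key property is that a phase only returns success once the build and tests are green, which is what makes the pipeline's progress incremental.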
All pipeline state is stored in the .testagent/ folder:
| File | Purpose |
|---|---|
| .testagent/research.md | Codebase analysis results |
| .testagent/plan.md | Phased implementation plan |
| .testagent/status.md | Progress tracking (optional) |
- Generate unit tests for my Calculator project at C:\src\Calculator
- Generate unit tests for src/services/UserService.ts
- Add tests for the authentication module with focus on edge cases
| Agent | Purpose |
|---|---|
| code-testing-generator | Coordinates pipeline |
| code-testing-researcher | Analyzes codebase |
| code-testing-planner | Creates test plan |
| code-testing-implementer | Writes test files |
| code-testing-builder | Compiles code |
| code-testing-tester | Runs tests |
| code-testing-fixer | Fixes errors |
| code-testing-linter | Formats code |
The code-testing-fixer agent will attempt to resolve compilation errors. Check .testagent/plan.md for the expected test structure. Check the extensions/ folder for language-specific error code references (e.g., extensions/dotnet.md for .NET).
Most failures in generated tests are caused by wrong expected values in assertions, not production code bugs. Correct the expected values rather than marking tests [Ignore] or [Skip] just to make them pass.

Specify your preferred framework in the initial request: "Generate Jest tests for..."
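As an illustration of such a failure, consider a hypothetical `apply_discount` function (not from the skill itself) where a generated test encodes the wrong assumption about the parameter's units:

```python
def apply_discount(price, percent):
    # Hypothetical production function: percent is 0-100, not a 0-1 fraction.
    return round(price * (1 - percent / 100), 2)

# A generated test might wrongly assume a 0-1 fraction:
#   assert apply_discount(100.0, 0.1) == 90.0   # fails; the expectation is wrong
# The production code is correct, so fix the expected value
# instead of skipping the test:
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(20.0, 25) == 15.0
```

Before editing an assertion, confirm against the production code (or its docs) which side is actually wrong.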
Tests that depend on external services, network endpoints, specific ports, or precise timing will fail in CI environments. Focus on unit tests with mocked dependencies instead.
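A minimal sketch of this approach, assuming a hypothetical `UserService` whose HTTP client is injected so a test can substitute a mock via Python's stdlib unittest.mock:

```python
from unittest import mock

# Hypothetical service that would normally hit the network.
class UserService:
    def __init__(self, http_client):
        self.http_client = http_client

    def display_name(self, user_id):
        data = self.http_client.get_json(f"/users/{user_id}")
        return data["name"].strip()

# The HTTP client is replaced by a mock, so the test never opens a
# socket and cannot fail because of CI network or port restrictions.
client = mock.Mock()
client.get_json.return_value = {"name": "  Ada Lovelace  "}
service = UserService(client)

assert service.display_name(42) == "Ada Lovelace"
client.get_json.assert_called_once_with("/users/42")
```

The same pattern applies with Moq in C#, jest.mock in TypeScript, or interface fakes in Go: the unit under test exercises its own logic while the external dependency is simulated.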
During phase implementation, build only the specific test project for speed. After all phases, run a full non-incremental workspace build to catch cross-project errors.