# dotnet-test

Detects anti-patterns in .NET test suites (MSTest, xUnit, NUnit, TUnit): no assertions, flakiness, swallowed exceptions, over-mocking, isolation issues, poor naming.

```
npx claudepluginhub dotnet/skills --plugin dotnet-test
```

This skill uses the workspace's default tool permissions.
Analyze .NET test code for anti-patterns, code smells, and quality issues that undermine test reliability, maintainability, and diagnostic value.
| Input | Required | Description |
|---|---|---|
| Test code | Yes | One or more test files or classes to analyze |
| Production code | No | The code under test, for context on what tests should verify |
| Specific concern | No | A focused area like "flakiness" or "naming" to narrow the review |
Read the test files the user wants reviewed. If the user points to a directory or project, scan for all test files (files containing [TestClass], [TestMethod], [Fact], [Test], or [Theory] attributes).
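The attribute scan above can be sketched as a single grep. The temp-directory setup below exists only to make the example self-contained; in practice you would point grep at the repository's test directories.

```shell
# Create a throwaway directory with one sample test file (demo scaffolding only).
tmp=$(mktemp -d)
printf '[Fact]\npublic void Works() { }\n' > "$tmp/SampleTests.cs"

# List files containing any of the common .NET test attributes.
grep -rlE '\[(TestClass|TestMethod|Fact|Theory|Test)\]' "$tmp"
```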
If production code is available, read it too -- this is critical for detecting tests that are coupled to implementation details rather than behavior.
Check each test file against the anti-pattern catalog below. Report findings grouped by severity.
**Critical**

| Anti-Pattern | What to Look For |
|---|---|
| No assertions | Test methods that execute code but never assert anything. A passing test without assertions proves nothing. |
| Swallowed exceptions | try { ... } catch { } or catch (Exception) without rethrowing or asserting. Failures are silently hidden. |
| Assert in catch block only | try { Act(); } catch (Exception ex) { Assert.Fail(ex.Message); } -- use Assert.ThrowsException or equivalent instead. The test passes when no exception is thrown even if the result is wrong. |
| Always-true assertions | Assert.IsTrue(true), Assert.AreEqual(x, x), or conditions that can never fail. |
| Commented-out assertions | Assertions that were disabled but the test still runs, giving the illusion of coverage. |
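To illustrate the assert-in-catch pattern, a hypothetical MSTest example (Calculator.Divide is an invented method):

```csharp
// Anti-pattern: the catch-and-fail wrapper only reacts to exceptions.
// If Divide(1, 0) silently returns a wrong value instead of throwing,
// this test passes without verifying anything.
[TestMethod]
public void Divide_ByZero_AntiPattern()
{
    try
    {
        Calculator.Divide(1, 0);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}

// Fix: state the expectation directly, with the specific exception type.
[TestMethod]
public void Divide_ByZero_ThrowsDivideByZeroException()
{
    Assert.ThrowsException<DivideByZeroException>(() => Calculator.Divide(1, 0));
}
```

The fixed version fails loudly in both bad cases: when no exception is thrown and when the wrong exception type is thrown.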
**High**

| Anti-Pattern | What to Look For |
|---|---|
| Flakiness indicators | Thread.Sleep(...), Task.Delay(...) for synchronization, DateTime.Now/DateTime.UtcNow without abstraction, Random without a seed, environment-dependent paths. |
| Test ordering dependency | Static mutable fields modified across tests, [TestInitialize] that doesn't fully reset state, tests that fail when run individually but pass in suite (or vice versa). |
| Over-mocking | More mock setup lines than actual test logic. Verifying exact call sequences on mocks rather than outcomes. Mocking types the test owns. |
| Implementation coupling | Testing private methods via reflection, asserting on internal state, verifying exact method call counts on collaborators instead of observable behavior. |
| Broad exception assertions | Assert.ThrowsException<Exception>(...) instead of the specific exception type. Also: [ExpectedException(typeof(Exception))]. |
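One common fix for clock-based flakiness is injecting the time source. A minimal sketch with invented TokenChecker/IsExpired names, in xUnit (on .NET 8+, the built-in TimeProvider type plays the same role as the injected delegate):

```csharp
// Flaky: the result depends on the wall clock at the moment the test runs.
public static bool IsExpiredFlaky(DateTime issuedUtc) =>
    DateTime.UtcNow - issuedUtc > TimeSpan.FromHours(1);

// Testable: the clock is injected, so the test controls "now".
public class TokenChecker
{
    private readonly Func<DateTime> _utcNow;
    public TokenChecker(Func<DateTime> utcNow) => _utcNow = utcNow;

    public bool IsExpired(DateTime issuedUtc) =>
        _utcNow() - issuedUtc > TimeSpan.FromHours(1);
}

[Fact]
public void IsExpired_IssuedTwoHoursAgo_ReturnsTrue()
{
    var now = new DateTime(2024, 1, 1, 12, 0, 0, DateTimeKind.Utc);
    var checker = new TokenChecker(() => now);
    Assert.True(checker.IsExpired(now.AddHours(-2)));
}
```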
**Medium**

| Anti-Pattern | What to Look For |
|---|---|
| Poor naming | Test names like Test1, TestMethod, names that don't describe the scenario or expected outcome. Good: Add_NegativeNumber_ThrowsArgumentException. |
| Magic values | Unexplained numbers or strings in arrange/assert: Assert.AreEqual(42, result) -- what does 42 mean? |
| Duplicate tests | Three or more test methods with near-identical bodies that differ only in a single input value. Should be data-driven ([DataRow], [Theory], [TestCase]). Note: Two tests covering distinct boundary conditions (e.g., zero vs. negative) are NOT duplicates -- separate tests for different edge cases provide clearer failure diagnostics and are a valid practice. |
| Giant tests | Test methods exceeding ~30 lines or testing multiple behaviors at once. Hard to diagnose when they fail. |
| Assertion messages that repeat the assertion | Assert.AreEqual(expected, actual, "Expected and actual are not equal") adds no information. Messages should describe the business meaning. |
| Missing AAA separation | Arrange, Act, Assert phases are interleaved or indistinguishable. |
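A sketch of the data-driven consolidation, using MSTest [DataTestMethod]/[DataRow] and an invented Calculator.Add (xUnit's [Theory]/[InlineData] and NUnit's [TestCase] are the equivalents):

```csharp
// Before: three bodies that differ only in input values.
[TestMethod] public void Add_One() => Assert.AreEqual(3, Calculator.Add(1, 2));
[TestMethod] public void Add_Two() => Assert.AreEqual(4, Calculator.Add(2, 2));
[TestMethod] public void Add_Ten() => Assert.AreEqual(12, Calculator.Add(10, 2));

// After: one parameterized test; each row still runs and fails independently.
[DataTestMethod]
[DataRow(1, 2, 3)]
[DataRow(2, 2, 4)]
[DataRow(10, 2, 12)]
public void Add_TwoNumbers_ReturnsSum(int a, int b, int expected) =>
    Assert.AreEqual(expected, Calculator.Add(a, b));
```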
**Low**

| Anti-Pattern | What to Look For |
|---|---|
| Unused test infrastructure | [TestInitialize]/[SetUp] that does nothing, test helper methods that are never called. |
| IDisposable not disposed | Test creates HttpClient, Stream, or other disposable objects without a using statement or explicit cleanup. |
| Console.WriteLine debugging | Leftover Console.WriteLine or Debug.WriteLine statements used during test development. |
| Inconsistent naming convention | Mix of naming styles in the same test class (e.g., some use Method_Scenario_Expected, others use ShouldDoSomething). |
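A minimal before/after sketch of the disposal fix, in xUnit (the endpoint URL is purely illustrative):

```csharp
// Leaky: the client is never disposed; handles can accumulate across a suite.
[Fact]
public async Task Health_ReturnsOk_Leaky()
{
    var client = new HttpClient();
    var response = await client.GetAsync("http://localhost:5000/health");
    Assert.True(response.IsSuccessStatusCode);
}

// Fixed: a using declaration disposes the client when the test method exits.
[Fact]
public async Task Health_ReturnsOk()
{
    using var client = new HttpClient();
    var response = await client.GetAsync("http://localhost:5000/health");
    Assert.True(response.IsSuccessStatusCode);
}
```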
Before reporting, re-check each finding against these severity rules:
Do not flag: state reset in [TestInitialize] (this improves isolation), or tests that are short and clear but could theoretically be consolidated.

IMPORTANT: If the tests are well-written, say so clearly up front. Do not inflate severity to justify the review. A review that finds zero Critical/High issues and only minor Low suggestions is a valid and valuable outcome. Lead with what the tests do well.
Present findings grouped by severity (Critical, High, Medium, Low), each with the file, the test name, the anti-pattern, and a minimal suggested fix.

If there are many findings, recommend which to fix first: systemic Critical issues before isolated Low ones.
| Pitfall | Solution |
|---|---|
| Reporting style issues as critical | Naming and formatting are Medium/Low, never Critical |
| Suggesting rewrites instead of targeted fixes | Show minimal diffs -- change the assertion, not the whole test |
| Flagging intentional design choices | If Thread.Sleep is in an integration test testing actual timing, that's not an anti-pattern. Consider context. |
| Inventing false positives on clean code | If tests follow best practices, say so. A review finding "0 Critical, 0 High, 1 Low" is perfectly valid. Don't inflate findings to justify the review. |
| Flagging separate boundary tests as duplicates | Two tests for zero and negative inputs test different edge cases. Only flag as duplicates when 3+ tests have truly identical bodies differing by a single value. |
| Rating cosmetic issues as Medium | Naming mismatches (e.g., method name says ArgumentException but asserts ArgumentOutOfRangeException) are Low, not Medium -- the test still works correctly. |
| Ignoring the test framework | xUnit uses [Fact]/[Theory], NUnit uses [Test]/[TestCase], MSTest uses [TestMethod]/[DataRow] -- use correct terminology |
| Missing the forest for the trees | If 80% of tests have no assertions, lead with that systemic issue rather than listing every instance |