Use when creating leaf types, after refactoring, during implementation, or when testing advice is needed. Automatically invoked to write tests for new types, or use it as a testing expert advisor. Covers unit, integration, and system tests with an emphasis on in-memory dependencies. Ensures 100% coverage on leaf types through public-API testing.
/plugin marketplace add buzzdan/ai-coding-rules
/plugin install go-linter-driven-development@ai-coding-rules

This skill inherits all available tools. When active, it can use any tool Claude has access to.
examples/grpc-bufconn.md
examples/httptest-dsl.md
examples/integration-patterns.md
examples/jsonrpc-mock.md
examples/nats-in-memory.md
examples/system-patterns.md
examples/test-organization.md
examples/victoria-metrics.md
reference.md

Reference: See reference.md for comprehensive testutils patterns and DSL examples.
</objective>
<quick_start>
Ready after tests? Run linter: task lintwithfix
</quick_start>
<when_to_use>
<automatic_invocation>
<manual_invocation>
Prefer real implementations over mocks
Coverage targets
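To verify those targets with the standard Go toolchain:
$ go test -cover ./user/...                                  # quick per-package percentage
$ go test -coverprofile=cover.out ./user/... && go tool cover -func=cover.out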
<test_pyramid> Three levels of testing, each serving a specific purpose:
<unit_tests level="base">
pkg_test package, test public API only
</unit_tests>
<integration_tests level="middle">
<system_tests level="top">
tests/ folder
<reusable_infrastructure>
Build shared test infrastructure in internal/testutils/:
Dependency Priority (choose appropriate level):
Choose based on what you're testing, not dogmatically. In-memory is fastest but sometimes you need real services.
See reference.md for comprehensive testutils patterns and DSL examples. </reusable_infrastructure>
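As an illustration, here is a minimal sketch of such a DSL, assuming the fluent NewMockServer/OnGET/RespondJSON/Build API used in the system-test example later in this skill. The package layout and names are illustrative, not a published API:

// internal/testutils/mockserver.go - illustrative sketch, not a published API
package testutils

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
)

// MockServer accumulates declarative routes for an in-memory server.
type MockServer struct {
	mux *http.ServeMux
}

func NewMockServer() *MockServer {
	return &MockServer{mux: http.NewServeMux()}
}

// Route captures a pending method+path registration.
type Route struct {
	s      *MockServer
	method string
	path   string
}

func (s *MockServer) OnGET(path string) Route {
	return Route{s: s, method: http.MethodGet, path: path}
}

// RespondJSON registers a canned JSON response for the route.
func (r Route) RespondJSON(status int, body any) *MockServer {
	r.s.mux.HandleFunc(r.path, func(w http.ResponseWriter, req *http.Request) {
		if req.Method != r.method {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(status)
		_ = json.NewEncoder(w).Encode(body)
	})
	return r.s
}

// BuiltServer wraps the live httptest.Server; URL() and Close() mirror it.
type BuiltServer struct{ srv *httptest.Server }

func (s *MockServer) Build() *BuiltServer { return &BuiltServer{srv: httptest.NewServer(s.mux)} }
func (b *BuiltServer) URL() string        { return b.srv.URL }
func (b *BuiltServer) Close()             { b.srv.Close() }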
<workflow><unit_tests_workflow> Purpose: Test leaf types in isolation, 100% coverage target
Test structure:
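A sketch of that structure for a hypothetical user.Email leaf type (NewEmail and the module path are illustrative):

// user/email_test.go
package user_test // external test package: exercises the public API only

import (
	"testing"

	"github.com/stretchr/testify/require"

	"example.com/myapp/user" // hypothetical module path
)

func TestNewEmail_Success(t *testing.T) {
	tests := []struct {
		name  string
		input string
	}{
		{name: "simple address", input: "a@b.com"},
		{name: "subdomain", input: "a@mail.b.com"},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := user.NewEmail(tt.input)
			require.NoError(t, err)
			require.Equal(t, tt.input, got.String()) // No conditionals, complexity = 1
		})
	}
}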
See reference.md for detailed patterns and examples. </unit_tests_workflow>
<integration_tests_workflow> Purpose: Test seams between components, verify they work together
pkg_test or integration_test.go with build tags

File organization:
//go:build integration
package user_test
// Test Service + Repository + real/mock dependencies
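Filling in that skeleton, a hedged sketch wiring a Service to an in-memory repository (Service, NewService, and NewInMemoryRepository are illustrative names, not a fixed API):

//go:build integration

package user_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"

	"example.com/myapp/user" // hypothetical module path
)

func TestService_CreateAndGet(t *testing.T) {
	repo := user.NewInMemoryRepository() // in-memory seam, no external services
	svc := user.NewService(repo)

	created, err := svc.Create(context.Background(), "a@b.com")
	require.NoError(t, err)

	got, err := svc.Get(context.Background(), created.ID)
	require.NoError(t, err)
	require.Equal(t, created, got)
}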
See reference.md for integration test patterns with dependencies. </integration_tests_workflow>
<system_tests_workflow> Purpose: Black box test entire system, critical end-to-end workflows
Example with in-memory mock:
// tests/cli_test.go - Testing CLI against mock API
func TestCLI_UserWorkflow(t *testing.T) {
	user := map[string]string{"id": "1", "email": "a@b.com"}
	mockAPI := testutils.NewMockServer().
		OnGET("/users/1").RespondJSON(200, user).
		Build() // In-memory httptest.Server
	defer mockAPI.Close()

	cmd := exec.Command("./myapp", "get-user", "1",
		"--api-url", mockAPI.URL())
	output, err := cmd.CombinedOutput()
	require.NoError(t, err, string(output))
	require.Contains(t, string(output), "a@b.com") // Assert on output
}
Example with binary executable:
// tests/system_test.go - Testing against a real service binary
func TestSystem_WithRealService(t *testing.T) {
	// Start service binary in background
	svc := exec.Command("./myservice", "--port", "8080")
	require.NoError(t, svc.Start())
	defer svc.Process.Kill()

	// Wait for service to be ready
	waitForHealthy(t, "http://localhost:8080/health")

	// Run tests against real service
	resp, err := http.Get("http://localhost:8080/api/users")
	require.NoError(t, err)
	defer resp.Body.Close()
	require.Equal(t, http.StatusOK, resp.StatusCode) // Assert on response
}
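The waitForHealthy helper is not defined in this skill; a minimal sketch of one possible implementation (assumes net/http, time, and testing are imported):

// waitForHealthy polls the health endpoint until it returns 200 or the deadline passes.
func waitForHealthy(t *testing.T, url string) {
	t.Helper()
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // service is up
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	t.Fatalf("service at %s never became healthy", url)
}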
See reference.md for comprehensive system test patterns including test-containers. </system_tests_workflow>
</workflow>
<key_patterns>
Table-Driven Tests (Cyclomatic Complexity = 1):
// BAD - wantErr adds conditional, complexity > 1
tests := []struct {
	name    string
	input   string
	want    string
	wantErr bool // NEVER DO THIS
}{...}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		got, err := Parse(tt.input)
		if tt.wantErr { // <- Conditional! Complexity > 1
			require.Error(t, err)
		} else {
			require.NoError(t, err)
			require.Equal(t, tt.want, got)
		}
	})
}
// GOOD - Separate functions, complexity = 1
func TestParse_Success(t *testing.T) {
	tests := []struct {
		name  string
		input string
		want  string
	}{...}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := Parse(tt.input)
			require.NoError(t, err) // No conditionals
			require.Equal(t, tt.want, got)
		})
	}
}

func TestParse_Error(t *testing.T) {
	tests := []struct {
		name  string
		input string
	}{...}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			_, err := Parse(tt.input)
			require.Error(t, err) // No conditionals
		})
	}
}
Testify Suites:
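The full pattern lives in reference.md; as a quick illustration, a minimal suite using the real stretchr/testify/suite package could look like this (ServiceSuite and its fixtures are hypothetical):

package user_test

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

type ServiceSuite struct {
	suite.Suite
	// shared fixtures (e.g. in-memory repo), rebuilt per test
}

func (s *ServiceSuite) SetupTest() {
	// recreate fixtures so every test starts from a clean state
}

func (s *ServiceSuite) TestCreateUser() {
	s.Require().True(true) // assert via s.Require() / s.Assert()
}

func TestServiceSuite(t *testing.T) {
	suite.Run(t, new(ServiceSuite))
}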
Synchronization:
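For synchronization, prefer polling assertions over fixed sleeps. testify's require.Eventually (a real helper) polls a condition until it holds or times out; the store fixture below is illustrative:

// Polls every 10ms for up to 2s; fails the test if the condition never holds.
require.Eventually(t, func() bool {
	return store.Count() == 3
}, 2*time.Second, 10*time.Millisecond)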
See reference.md for complete patterns with code examples. </key_patterns>
<output_format> After writing tests:
TESTING COMPLETE
Unit Tests:
- user/user_id_test.go: 100% (4 test cases)
- user/email_test.go: 100% (6 test cases)
- user/service_test.go: 100% (8 test cases)
Integration Tests:
- user/integration_test.go: 3 workflows tested
- Dependencies: In-memory DB, httptest mock server
System Tests:
- tests/cli_test.go: 2 end-to-end workflows (in-memory mocks)
- tests/api_test.go: 1 full API workflow (binary executable)
- tests/db_test.go: 1 database workflow (test-containers)
Test Infrastructure:
- internal/testutils/httpserver: In-memory mock API with DSL
- internal/testutils/mockdb: In-memory database mock
- internal/testutils/containers: Test-container helpers
Test Execution:
$ go test ./... # All tests (in-memory only)
$ go test -tags=integration ./... # Include integration tests
$ go test ./tests/... # System tests (may need containers)
All tests pass
100% coverage on leaf types
Next Steps:
1. Run linter: task lintwithfix
2. If linter fails → use @refactoring skill
3. If linter passes → use @pre-commit-review skill
</output_format>
<testing_checklist>
<unit_tests_checklist>
<integration_tests_checklist>
Integration tests guarded by build tags (//go:build integration)
<system_tests_checklist>
<test_infrastructure_checklist>
See reference.md for complete testing guidelines and examples. </testing_checklist>
<success_criteria> Testing is complete when ALL of the following are true: