Testing strategies for LangChain4j-powered applications. Mock LLM responses, test retrieval chains, and validate AI workflows. Use when testing AI-powered features reliably.
/plugin marketplace add giuseppe-trisciuoglio/developer-kit
/plugin install developer-kit@giuseppe.trisciuoglio

This skill includes the following reference documents:
references/advanced-testing.md
references/integration-testing.md
references/testing-dependencies.md
references/unit-testing.md
references/workflow-patterns.md

Use this skill when you need to test AI-powered features reliably: mocking LLM responses, testing retrieval chains, or validating AI workflows.
To test LangChain4j applications effectively, follow these key strategies:
Use mock models for fast, isolated testing of business logic. See references/unit-testing.md for detailed examples.
// Example: mock the chat model for unit tests (ChatLanguageModel API)
ChatLanguageModel mockModel = mock(ChatLanguageModel.class);
// Stub the ChatMessage-list overload, which returns Response<AiMessage>
when(mockModel.generate(anyList()))
        .thenReturn(Response.from(AiMessage.from("Mocked response")));

var service = AiServices.builder(AiService.class)
        .chatLanguageModel(mockModel)
        .build();
Set up the proper Maven/Gradle dependencies for testing. See references/testing-dependencies.md for the complete configuration.
Key dependencies:
langchain4j-test - Testing utilities and guardrail assertions
testcontainers - Integration testing with containerized services
mockito - Mock external dependencies
assertj - Fluent assertions

Test with real services using Testcontainers. See references/integration-testing.md for container setup examples.
@Testcontainers
class OllamaIntegrationTest {

    @Container
    static GenericContainer<?> ollama = new GenericContainer<>(
            DockerImageName.parse("ollama/ollama:latest")
    ).withExposedPorts(11434);

    @Test
    void shouldGenerateResponse() {
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl("http://" + ollama.getHost() + ":" + ollama.getMappedPort(11434))
                .modelName("llama3") // example model; must already be pulled inside the container
                .build();

        String response = model.generate("Test query");

        assertNotNull(response);
    }
}
For streaming responses, memory management, and complex workflows, refer to references/advanced-testing.md.
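Chat-memory behavior, for example, can be verified without any model at all. A minimal sketch using MessageWindowChatMemory (the window size and messages are illustrative; static JUnit and AssertJ imports are assumed):

@Test
void shouldEvictOldestMessagesWhenWindowIsFull() {
    // Keep only the 2 most recent messages
    ChatMemory memory = MessageWindowChatMemory.withMaxMessages(2);

    memory.add(UserMessage.from("first"));
    memory.add(AiMessage.from("second"));
    memory.add(UserMessage.from("third"));

    // The oldest message has been evicted
    assertThat(memory.messages()).hasSize(2);
    assertThat(memory.messages().get(1)).isInstanceOf(UserMessage.class);
}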
Follow testing pyramid patterns and best practices from references/workflow-patterns.md.
@Test
void shouldProcessQueryWithMock() {
    ChatLanguageModel mockModel = mock(ChatLanguageModel.class);
    when(mockModel.generate(anyList()))
            .thenReturn(Response.from(AiMessage.from("Test response")));

    var service = AiServices.builder(AiService.class)
            .chatLanguageModel(mockModel)
            .build();

    String result = service.chat("What is Java?");

    assertEquals("Test response", result);
}
@Testcontainers
class RAGIntegrationTest {

    @Container
    static GenericContainer<?> ollama = new GenericContainer<>(
            DockerImageName.parse("ollama/ollama:latest")
    ).withExposedPorts(11434);

    @Test
    void shouldCompleteRAGWorkflow() {
        String baseUrl = "http://" + ollama.getHost() + ":" + ollama.getMappedPort(11434);

        // Set up models and the embedding store
        var chatModel = OllamaChatModel.builder()
                .baseUrl(baseUrl)
                .modelName("llama3") // example model; must already be pulled inside the container
                .build();
        var embeddingModel = OllamaEmbeddingModel.builder()
                .baseUrl(baseUrl)
                .modelName("nomic-embed-text") // example embedding model; must already be pulled
                .build();
        var store = new InMemoryEmbeddingStore<TextSegment>();

        // Ingest a document so the retriever has content to find
        TextSegment segment = TextSegment.from("Spring Boot simplifies building production-ready Java applications.");
        store.add(embeddingModel.embed(segment).content(), segment);

        var retriever = EmbeddingStoreContentRetriever.builder()
                .embeddingStore(store)
                .embeddingModel(embeddingModel)
                .build();

        // Test the complete workflow
        var assistant = AiServices.builder(RagAssistant.class)
                .chatLanguageModel(chatModel)
                .contentRetriever(retriever)
                .build();

        String response = assistant.chat("What is Spring Boot?");

        assertNotNull(response);
        assertTrue(response.contains("Spring"));
    }
}
Use @BeforeEach and @AfterEach for setup and teardown (a minimal sketch follows). For comprehensive testing guides and API references, see the reference documents listed above.
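A minimal setup/teardown sketch (assuming the AiService interface and the static Mockito imports from the earlier examples):

class AiServiceTest {

    private ChatLanguageModel mockModel;
    private AiService service;

    @BeforeEach
    void setUp() {
        // Fresh mock and service for every test
        mockModel = mock(ChatLanguageModel.class);
        service = AiServices.builder(AiService.class)
                .chatLanguageModel(mockModel)
                .build();
    }

    @AfterEach
    void tearDown() {
        // Clear recorded interactions between tests
        reset(mockModel);
    }
}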
// For fast unit tests: stub the ChatMessage-list overload
ChatLanguageModel mockModel = mock(ChatLanguageModel.class);
when(mockModel.generate(anyList())).thenReturn(Response.from(AiMessage.from("Mocked")));

// For specific responses when the model is called directly with a String prompt
when(mockModel.generate(eq("Hello"))).thenReturn("Hi");
when(mockModel.generate(contains("Java"))).thenReturn("Java response");
// Use a test-specific configuration
@TestPropertySource(properties = {
        "langchain4j.ollama.base-url=http://localhost:11434"
})
class TestConfig {
    // Tests run with isolated configuration
}
// Custom assertions for AI responses (AssertJ)
assertThat(response).isNotNull().isNotEmpty();
assertThat(response).contains("Spring", "Boot"); // all expected keywords must be present
assertThat(response).doesNotContain("error");
Use @Timeout on tests that call external services (see the sketch after the testing pyramid below).

70% Unit Tests
├─ Business logic validation
├─ Guardrail testing
├─ Mock tool execution
└─ Edge case handling
20% Integration Tests
├─ Testcontainers with Ollama
├─ Vector store testing
├─ RAG workflow validation
└─ Performance benchmarking
10% End-to-End Tests
├─ Complete user journeys
├─ Real model interactions
└─ Performance under load
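As referenced above, a sketch of guarding an external-service test with @Timeout (the model field and the 60-second limit are illustrative):

@Test
@Timeout(value = 60, unit = TimeUnit.SECONDS) // fail fast if the external service hangs
void shouldAnswerWithinTimeout() {
    String response = model.generate("Test query");
    assertNotNull(response);
}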
Related skills: spring-boot-test-patterns, unit-test-service-layer, unit-test-boundary-conditions