Integrates LangChain4j with Spring Boot via auto-configuration, AI model beans, chat memory, RAG pipelines with Spring Data, and declarative AI services. Use for Java LLM apps and AI microservices.
Integrate LangChain4j with Spring Boot using declarative AI Services, auto-configuration, and Spring Boot starters. Configure AI model beans, set up chat memory, implement RAG pipelines with Spring Data, and build production-ready AI applications.
Use this skill when building Java LLM applications or AI microservices with Spring Boot, for example when defining declarative AI services or wiring AI model beans with @Bean annotations.

LangChain4j Spring Boot integration provides declarative AI Services through Spring Boot starters, enabling automatic configuration of AI components based on application properties. Combine Spring dependency injection with LangChain4j's AI capabilities using annotated, interface-based service definitions.

Add the Spring Boot starters to your pom.xml:
<!-- Core LangChain4j Spring Boot Starter -->
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-spring-boot-starter</artifactId>
    <version>1.8.0</version>
</dependency>

<!-- OpenAI Spring Boot Starter -->
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
    <version>1.8.0</version>
</dependency>
# application.properties
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
langchain4j.open-ai.chat-model.temperature=0.7
langchain4j.open-ai.chat-model.timeout=PT60S
langchain4j.open-ai.chat-model.max-tokens=1000
Or using YAML:
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o-mini
      temperature: 0.7
      timeout: 60s
      max-tokens: 1000
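With the starter on the classpath and these properties set, a ChatModel bean is auto-configured and can be injected directly. A minimal sketch, assuming Spring Web is on the classpath (the controller class and /chat endpoint are illustrative, not part of the starter):

```java
import dev.langchain4j.model.chat.ChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller showing direct use of the auto-configured model
@RestController
public class ChatController {

    private final ChatModel chatModel; // provided by langchain4j-open-ai-spring-boot-starter

    public ChatController(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String q) {
        // Sends a single user message and returns the model's text reply
        return chatModel.chat(q);
    }
}
```

Injecting the low-level ChatModel is useful for quick checks; for application code, prefer the declarative AI Services shown next.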
import dev.langchain4j.service.spring.AiService;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;

@AiService
public interface CustomerSupportAssistant {

    @SystemMessage("You are a helpful customer support agent for TechCorp.")
    String handleInquiry(String customerMessage);

    @UserMessage("Translate to {{language}}: {{text}}")
    String translate(@V("text") String text, @V("language") String language);
}
@SpringBootApplication
@ComponentScan(basePackages = {
    "com.yourcompany",
    "dev.langchain4j.service.spring"
})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
@Service
public class CustomerService {

    private final CustomerSupportAssistant assistant;

    public CustomerService(CustomerSupportAssistant assistant) {
        this.assistant = assistant;
    }

    public String processCustomerQuery(String query) {
        return assistant.handleInquiry(query);
    }
}
After setup, verify the configuration:
- Check LangChain4jSpringBootAutoConfiguration activation
- Confirm the CustomerSupportAssistant bean is registered in the Spring context
- Call assistant.handleInquiry("test") and verify a response is returned

Property-Based Configuration: Configure AI models through application.properties for different providers.
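These verification steps can be sketched as a Spring Boot test, assuming JUnit 5 and spring-boot-starter-test are on the test classpath:

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

// Verifies that auto-configuration registered the AI service in the context
@SpringBootTest
class AiServiceWiringTest {

    @Autowired
    CustomerSupportAssistant assistant;

    @Test
    void assistantIsRegistered() {
        assertNotNull(assistant);
        // Optionally exercise the model end-to-end (requires a valid API key):
        // assertNotNull(assistant.handleInquiry("test"));
    }
}
```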
Manual Bean Configuration: For advanced configurations, define beans manually:
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

@Configuration
public class AiConfig {

    @Bean
    public ChatModel chatModel(@Value("${OPENAI_API_KEY}") String apiKey) {
        return OpenAiChatModel.builder()
                .apiKey(apiKey)
                .modelName("gpt-4o-mini")
                .temperature(0.7)
                .build();
    }
}
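When model beans are defined manually, an AI service can also be built programmatically instead of via @AiService. A sketch using LangChain4j's AiServices factory, assuming the CustomerSupportAssistant interface from above:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.service.AiServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ManualAiServiceConfig {

    // Builds the AI service from the manually configured ChatModel bean
    @Bean
    public CustomerSupportAssistant customerSupportAssistant(ChatModel chatModel) {
        return AiServices.builder(CustomerSupportAssistant.class)
                .chatModel(chatModel)
                .build();
    }
}
```

This keeps full control over model construction while still exposing the assistant as an injectable Spring bean.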
Multiple Providers: Use explicit wiring when configuring multiple AI providers, pointing each AI service at a specific model bean by name:

@AiService(wiringMode = AiServiceWiringMode.EXPLICIT, chatModel = "openAiChatModel")
public interface OpenAiAssistant {
    String chat(String message);
}

@AiService(wiringMode = AiServiceWiringMode.EXPLICIT, chatModel = "azureChatModel")
public interface AzureAssistant {
    String chat(String message);
}
Basic AI Service: Create interfaces annotated with @AiService and define methods with message templates.
Streaming AI Service: Implement streaming responses using Project Reactor (requires the langchain4j-reactor module and a configured streaming chat model):

@AiService
public interface StreamingAssistant {

    @SystemMessage("You are a helpful assistant.")
    Flux<String> chatStream(String message);
}
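A streaming service like this is typically exposed over server-sent events. A sketch (the endpoint and class names are illustrative), assuming a streaming chat model is configured via the langchain4j.open-ai.streaming-chat-model.* properties:

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

// Streams model tokens to the client as server-sent events
@RestController
public class StreamingController {

    private final StreamingAssistant assistant;

    public StreamingController(StreamingAssistant assistant) {
        this.assistant = assistant;
    }

    @GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> stream(@RequestParam String message) {
        return assistant.chatStream(message);
    }
}
```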
Chat Memory: Set up conversation memory with the Spring context. When a method has multiple parameters, each must be annotated (here @MemoryId and @UserMessage):

@AiService
public interface ConversationalAssistant {

    @SystemMessage("You are a helpful assistant with memory.")
    String chat(@MemoryId String userId, @UserMessage String message);
}
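@MemoryId only takes effect if a ChatMemoryProvider bean is available to create one memory per conversation id. A minimal sketch keeping a sliding window of the last 10 messages per user (the window size is an illustrative choice):

```java
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MemoryConfig {

    // One sliding-window memory per @MemoryId value (here: per userId)
    @Bean
    public ChatMemoryProvider chatMemoryProvider() {
        return memoryId -> MessageWindowChatMemory.withMaxMessages(10);
    }
}
```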
Embedding Stores: Configure embedding stores for RAG pipelines with Spring Data:
@Configuration
public class RagConfig {

    @Bean
    public EmbeddingStore<TextSegment> embeddingStore() {
        return PgVectorEmbeddingStore.builder()
                .host("localhost")
                .port(5432)
                .database("vectordb")
                .user("postgres")        // placeholder credentials; supply real ones in production
                .password("postgres")
                .table("embeddings")
                .dimension(1536)
                .build();
    }

    @Bean
    public EmbeddingModel embeddingModel() {
        return OpenAiEmbeddingModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("text-embedding-3-small")
                .build();
    }
}
@AiService
public interface RagAssistant {

    @UserMessage("Question: {{it}}")
    String answer(String question);
}
Document Ingestion: Use an EmbeddingStoreIngestor with a DocumentSplitter to split documents into segments and store their embeddings.
Content Retrieval: Configure an EmbeddingStoreContentRetriever for knowledge augmentation; a ContentInjector can customize how retrieved content is added to the prompt.
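The ingestion and retrieval steps above can be sketched as beans; segment sizes and score thresholds below are illustrative defaults, not recommendations from this guide:

```java
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RagPipelineConfig {

    // Splits documents into ~300-char segments (30-char overlap) and embeds them
    @Bean
    public EmbeddingStoreIngestor ingestor(EmbeddingStore<TextSegment> store,
                                           EmbeddingModel model) {
        return EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 30))
                .embeddingModel(model)
                .embeddingStore(store)
                .build();
    }

    // Retrieves the top matching segments to augment AI service prompts
    @Bean
    public ContentRetriever contentRetriever(EmbeddingStore<TextSegment> store,
                                             EmbeddingModel model) {
        return EmbeddingStoreContentRetriever.builder()
                .embeddingStore(store)
                .embeddingModel(model)
                .maxResults(5)
                .minScore(0.6)
                .build();
    }
}
```

With a single ContentRetriever bean in the context, AI services such as RagAssistant pick it up for retrieval-augmented answers.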
Spring Component Tools: Define tools as Spring components:
import dev.langchain4j.agent.tool.Tool;

@Component
public class Calculator {

    @Tool("Calculate the sum of two numbers")
    public double add(double a, double b) {
        return a + b;
    }
}

@AiService
public interface MathAssistant {
    String solve(String problem);
}
@AiService
public interface ChatAssistant {

    @SystemMessage("You are a helpful assistant.")
    String chat(String message);
}

@AiService
public interface ConversationalAssistant {

    @SystemMessage("You are a helpful assistant with memory of conversations.")
    String chat(@MemoryId String userId, @UserMessage String message);
}

@Component
public class WeatherService {

    @Tool("Get weather for a city")
    public String getWeather(String city) {
        return "Sunny, 22°C in " + city;
    }
}

@AiService
public interface WeatherAssistant {
    String getWeatherForCity(String city);
}
For more examples (including RAG configurations, streaming assistants, and multi-provider setups), refer to references/examples.md.
For detailed API references and advanced configurations, see references/configuration.md and references/references.md.