# java-spring
Integrates Spring AI or LangChain4J into Spring Boot projects for AI features like chatbots, RAG, vector stores, streaming LLM responses, and tool calls.
Install:

```shell
npx claudepluginhub ducpm2303/claude-java-plugins --plugin java-spring
```
Detect the framework in use, then apply the correct patterns.
Check pom.xml or build.gradle:
- `spring-ai-*` dependency → Spring AI (note version: 1.0.x GA or 0.8.x milestone)
- `langchain4j-*` dependency → LangChain4J (note version: 0.x or 1.x)

Check Spring Boot version (Spring AI 1.x requires Spring Boot 3.x):
## review
User asks to review existing AI code. Check for:
Spring AI:
- `ChatClient` built via `ChatClient.Builder` (not raw `ChatModel`) for the fluent API
- `PromptTemplate` with variables — no string concatenation
- `stream().content()` or `Flux<String>` — not blocking `.call()` for real-time responses
- `@Retryable` or Spring AI retry config on ChatClient calls — LLMs are flaky
- API keys (e.g., `spring.ai.openai.api-key`) come from env vars or Vault, never hardcoded
- `VectorStore` queries use `SearchRequest.query(text).withTopK(n)` — not raw SQL
- RAG advisor (`QuestionAnswerAdvisor`) attached to the ChatClient — not manual context injection

LangChain4J:
- `@AiService` interface — not `ChatLanguageModel.generate()` directly
- `@SystemMessage` annotation — not hardcoded strings
- `MessageWindowChatMemory` or `TokenWindowChatMemory` — not unlimited history
- `StreamingChatLanguageModel` with `TokenStream` — not blocking
- `EmbeddingModel` + `EmbeddingStore` for RAG — not in-memory list search
- `@Tool` on service methods — not manual function dispatch
- Keys via `@Value("${langchain4j.openai.api-key}")` — never literal

## chat
User asks to add a basic chatbot or chat endpoint.
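A minimal sketch of the Spring AI route (assuming the 1.x `ChatClient` API and a configured provider starter; the class and endpoint names are illustrative, not prescribed by this skill):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder from the provider starter
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder
                .defaultSystem("You are a helpful assistant.")
                .build();
    }

    @PostMapping("/chat")
    String chat(@RequestBody String message) {
        // Blocking call: fine for simple request/response
        return chatClient.prompt().user(message).call().content();
    }

    @PostMapping(value = "/chat/stream", produces = "text/event-stream")
    Flux<String> stream(@RequestBody String message) {
        // Token-by-token streaming instead of waiting for the full completion
        return chatClient.prompt().user(message).stream().content();
    }
}
```

The streaming variant emits tokens as server-sent events, which matches the "not blocking `.call()`" review rule above.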
Spring AI (see `references/patterns.md` → Spring AI Setup):
- Inject `ChatClient.Builder`, build a `ChatClient` bean
- `ChatController` with `@PostMapping("/chat")`
- `chatClient.prompt().user(message).call().content()` for a simple response
- Streaming: `Flux<String>` with `chatClient.prompt().user(message).stream().content()`
- Add `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` to application.yml via `${env-var}`

LangChain4J:
- `langchain4j-spring-boot-starter` + provider dependency
- `@AiService` interface with `@SystemMessage`
- Or manually: `AiServices.builder(MyAssistant.class).chatLanguageModel(model).build()`
- Expose via `@RestController`

## rag
User asks to implement RAG (chat over documents, knowledge base, semantic search).
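A rough Spring AI sketch of the ingest-then-retrieve wiring (assumptions: a `VectorStore` bean is auto-configured by a store starter, the `classpath:docs/faq.txt` resource is a made-up example, and the `SearchRequest` setters follow the pre-GA naming used in this checklist; 1.0 GA moved them to a builder):

```java
import java.util.List;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor;
import org.springframework.ai.document.Document;
import org.springframework.ai.reader.TextReader;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;

@Configuration
class RagConfig {

    // Ingest once at startup; a real app might expose @PostMapping("/ingest") instead
    @Bean
    ApplicationRunner ingest(VectorStore vectorStore,
                             @Value("classpath:docs/faq.txt") Resource faq) {
        return args -> {
            List<Document> docs = new TextReader(faq).get();       // read
            vectorStore.add(new TokenTextSplitter().apply(docs));  // chunk, embed, store
        };
    }

    @Bean
    ChatClient ragChatClient(ChatClient.Builder builder, VectorStore vectorStore) {
        // The advisor retrieves top-k chunks and injects them into the prompt
        return builder
                .defaultAdvisors(new QuestionAnswerAdvisor(vectorStore,
                        SearchRequest.defaults().withTopK(5).withSimilarityThreshold(0.7)))
                .build();
    }
}
```

Attaching the advisor replaces manual context injection, per the review checklist.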
Spring AI (see `references/patterns.md`):
- Add `spring-ai-{store}-store-spring-boot-starter`
- Ingestion: `DocumentReader` (PDF, text, web) → `TokenTextSplitter` → `VectorStore.add()`
- Run ingestion from an `ApplicationRunner` or a dedicated `@PostMapping("/ingest")`
- Attach `QuestionAnswerAdvisor(vectorStore)` to the ChatClient
- Tune retrieval: `SearchRequest.withTopK(5).withSimilarityThreshold(0.7)`

LangChain4J:
- Pick an `EmbeddingStore` (Chroma, Qdrant, in-memory for dev)
- `EmbeddingStoreIngestor` with `DocumentSplitter` and `EmbeddingModel`
- `EmbeddingStoreContentRetriever` → `RetrievalAugmentor` → `AiServices` builder

## tools
User asks to give the AI the ability to call Java methods (function/tool calling).
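A hedged Spring AI sketch of tool calling via a `Function` bean (the weather tool, its record types, and the stubbed return value are invented for illustration; `withFunction` follows the pre-GA `OpenAiChatOptions` naming used in this checklist):

```java
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
class WeatherTools {

    record WeatherRequest(String city) {}
    record WeatherResponse(double temperatureCelsius) {}

    // Spring AI auto-registers Function beans as callable tools;
    // the bean name ("currentWeather") becomes the tool name
    @Bean
    @Description("Get the current temperature for a city")
    Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(21.0); // stubbed lookup
    }
}

// Opting a call in to the tool (illustrative usage):
// String answer = chatClient.prompt()
//         .user("What's the weather in Hanoi?")
//         .options(OpenAiChatOptions.builder().withFunction("currentWeather").build())
//         .call().content();
```

The `@Description` on the bean gives the model a schema description without manual function dispatch.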
Spring AI:
- `@Bean` of type `Function<Input, Output>` — Spring AI auto-registers it
- `@Description` on a record parameter for a rich schema
- Enable on the ChatClient: `.options(OpenAiChatOptions.builder().withFunction("myFunction").build())`

LangChain4J:
- `@Tool("description of what this tool does")` on service methods
- `AiServices.builder(...).tools(myToolService).build()`

## memory
User asks to add conversation memory / chat history.
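A sketch of the Spring AI memory setup for a single-instance app (pre-GA advisor names as used in this checklist; 1.0 GA renamed the conversation-id key, and the session id shown is an assumed caller-supplied value):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.MessageChatMemoryAdvisor;
import org.springframework.ai.chat.memory.InMemoryChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class MemoryConfig {

    @Bean
    ChatClient memoryChatClient(ChatClient.Builder builder) {
        // In-memory store: single-instance/dev only;
        // use JdbcChatMemory for persistent, multi-instance deployments
        return builder
                .defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory()))
                .build();
    }
}

// Per request, scope the history to one conversation (illustrative usage):
// chatClient.prompt()
//         .user(message)
//         .advisors(a -> a.param(
//                 AbstractChatMemoryAdvisor.CHAT_MEMORY_CONVERSATION_ID_KEY, sessionId))
//         .call().content();
```

Passing a per-user `conversationId` keeps one caller's history from leaking into another's.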
Spring AI:
- `MessageChatMemoryAdvisor` with `InMemoryChatMemory` for single-instance apps
- `JdbcChatMemory` for persistent / multi-instance memory (requires the spring-ai-jdbc store)
- Pass a `conversationId` (e.g., session ID or user ID) to scope memory per user

LangChain4J:
- `MessageWindowChatMemory.withMaxMessages(20)` — keeps the last N messages
- `TokenWindowChatMemory` — keeps messages within a token budget
- Persist with a `ChatMemoryStore` backed by Redis or JDBC

For review mode: list findings as [CRITICAL] / [HIGH] / [MEDIUM] / [LOW] with file:line references.
For implementation modes (chat, rag, tools, memory):
- `application.yml` configuration

Always note version-specific differences:
- `AiServices` API changed in LangChain4J 1.x
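For reference, a hedged `application.yml` sketch for the Spring AI OpenAI starter (property names per Spring AI 1.x; the model and temperature values are illustrative):

```yaml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}   # from the environment, never a literal
      chat:
        options:
          model: gpt-4o-mini       # illustrative model choice
          temperature: 0.7
```

Keeping the key behind `${OPENAI_API_KEY}` satisfies the "env vars or Vault, never hardcoded" review rule.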