Generates LangChain4j tool calling patterns: annotates methods with @Tool and @P, registers tools via AiServices, validates parameters, handles errors. For AI agents integrating external APIs.
Part of the developer-kit-java plugin (giuseppe-trisciuoglio/developer-kit).
Bundled reference files:
- references/advanced-features.md
- references/core-patterns.md
- references/error-handling.md
- references/examples.md
- references/implementation-patterns.md
- references/integration-examples.md
- references/references.md
- references/setup-configuration.md
Provides patterns for annotating methods as tools, configuring tool executors, registering tools with AI services, validating parameters, and handling tool execution errors in LangChain4j applications.
LangChain4j uses the @Tool annotation to expose Java methods as callable functions for AI agents. The AiServices builder registers tools with a chat model, enabling LLMs to perform actions beyond text generation: database queries, API calls, calculations, and business system integrations. Parameters use @P for descriptions that guide the LLM.
Key APIs: the @Tool and @P annotations, AiServices.builder().tools(), and @ToolMemoryId.

Define a tool class with methods annotated with @Tool. Pass a description as the annotation value, and use @P to describe each parameter.
```java
public class WeatherTools {

    private final WeatherService weatherService;

    public WeatherTools(WeatherService weatherService) {
        this.weatherService = weatherService;
    }

    @Tool("Get current weather for a city")
    public String getWeather(
            @P("City name") String city,
            @P("Temperature unit: celsius or fahrenheit") String unit) {
        return weatherService.getWeather(city, unit);
    }
}
```
Validate: Create an instance and confirm the class loads without errors.
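Because the LLM may pass malformed argument values, tool methods benefit from defensive input validation before delegating to the real service. A minimal sketch in plain Java (no LangChain4j dependency; the class and method names here are hypothetical) for the unit parameter above:

```java
public class UnitValidator {

    // Returns the canonical unit, or null if the LLM passed an unsupported value.
    public static String normalizeUnit(String unit) {
        if (unit == null) {
            return null;
        }
        switch (unit.trim().toLowerCase()) {
            case "celsius":
            case "c":
                return "celsius";
            case "fahrenheit":
            case "f":
                return "fahrenheit";
            default:
                return null; // unsupported; the tool should return a readable error string
        }
    }
}
```

Inside getWeather, a null result would be turned into a plain-language message such as "Unsupported unit: " + unit, which the LLM can relay to the user or use to correct its next call.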
Use AiServices.builder() to register tool instances with the chat model.
```java
MathAssistant assistant = AiServices.builder(MathAssistant.class)
        .chatModel(chatModel)
        .tools(new Calculator(), new WeatherTools(weatherService))
        .build();
```
Validate: Call assistant.chat("What is 2 + 2?") and verify the LLM responds without throwing.
Send a prompt that triggers tool usage and verify the tool executes and its result is incorporated.
```java
String response = assistant.chat("What is the weather in Rome?");
System.out.println(response);
```
Validate: Check logs for tool invocation and confirm the response uses the tool output.
Add error handlers to gracefully manage failures without exposing stack traces.
```java
AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(new ExternalServiceTools())
        .toolExecutionErrorHandler((error, context) -> {
            logger.error("Tool execution failed: {}", error.getMessage());
            return ToolErrorHandlerResult.text("An error occurred while processing your request");
        })
        .hallucinatedToolNameStrategy(request ->
                ToolExecutionResultMessage.from(request,
                        "Error: tool '" + request.name() + "' does not exist"))
        .toolArgumentsErrorHandler((error, context) ->
                ToolErrorHandlerResult.text("Invalid arguments: " + error.getMessage()))
        .build();
```
Validate: Trigger an error condition and confirm the LLM receives a safe error message.
Enable concurrent tool execution and set timeouts for long-running tools.
```java
AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(new DbTools(), new HttpTools())
        .executeToolsConcurrently(Executors.newFixedThreadPool(5))
        .toolExecutionTimeout(Duration.ofSeconds(30))
        .build();
```
Validate: Run concurrent requests and confirm no thread contention or deadlocks.
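Conceptually, concurrent execution with a timeout amounts to submitting each independent tool call to a bounded pool and capping the total wait. A plain-Java sketch of that idea (illustrative only, not LangChain4j internals):

```java
import java.util.List;
import java.util.concurrent.*;

public class ConcurrentTools {

    // Runs independent tool calls in parallel, bounding the total wait.
    public static List<String> runAll(List<Callable<String>> toolCalls, long timeoutSeconds)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        try {
            // invokeAll bounds the wait; tasks still running at the deadline are cancelled
            List<Future<String>> futures = pool.invokeAll(toolCalls, timeoutSeconds, TimeUnit.SECONDS);
            return futures.stream().map(f -> {
                try {
                    return f.get();
                } catch (CancellationException | InterruptedException | ExecutionException e) {
                    return "Tool failed: " + e.getMessage();
                }
            }).toList();
        } finally {
            pool.shutdown();
        }
    }
}
```

The fixed pool size caps fan-out against external services, and the timeout turns a hung tool into a readable failure result instead of a stuck conversation.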
```java
public class Calculator {

    @Tool("Perform basic arithmetic")
    public double calculate(
            @P("Expression like 2+2 or 10*5") String expression) {
        // Parse and evaluate the expression
        return eval(expression);
    }
}
```
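The eval helper above is left unimplemented. A minimal sketch of a two-operand evaluator (hypothetical; production code would use a proper expression parser) that handles inputs like "2+2" or "10*5":

```java
public class SimpleEval {

    // Evaluates a single binary expression such as "2+2" or "10*5".
    public static double eval(String expression) {
        String expr = expression.replaceAll("\\s+", "");
        for (char op : new char[]{'+', '-', '*', '/'}) {
            // search from index 1 so a leading minus sign is not treated as an operator
            int i = expr.indexOf(op, 1);
            if (i > 0) {
                double left = Double.parseDouble(expr.substring(0, i));
                double right = Double.parseDouble(expr.substring(i + 1));
                switch (op) {
                    case '+': return left + right;
                    case '-': return left - right;
                    case '*': return left * right;
                    default:  return left / right;
                }
            }
        }
        throw new IllegalArgumentException("Unsupported expression: " + expression);
    }
}
```

Throwing IllegalArgumentException on bad input pairs well with a toolArgumentsErrorHandler, which converts the failure into a message the LLM can act on.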
```java
Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(OpenAiChatModel.builder()
                .apiKey(System.getenv("API_KEY"))
                .modelName("gpt-4o")
                .build())
        .tools(new Calculator())
        .build();
```
```java
@Tool(value = "Send email notification", returnBehavior = ReturnBehavior.IMMEDIATELY)
public void sendEmail(@P("Recipient email address") String to,
                      @P("Email subject") String subject,
                      @P("Email body") String body) {
    emailService.send(to, subject, body);
}
```
A ToolProvider returns tool specifications paired with executors, built per request. A sketch using the reflection helpers ToolSpecifications and DefaultToolExecutor (here the chat memory ID is assumed to carry the caller's role, which is illustrative only):

```java
ToolProvider provider = request -> {
    // Select the tool object based on the caller
    Object toolObject = "admin".equals(request.chatMemoryId())
            ? new AdminTools() : new UserTools();

    ToolProviderResult.Builder result = ToolProviderResult.builder();
    for (Method method : toolObject.getClass().getDeclaredMethods()) {
        if (method.isAnnotationPresent(Tool.class)) {
            result.add(ToolSpecifications.toolSpecificationFrom(method),
                    new DefaultToolExecutor(toolObject, method));
        }
    }
    return result.build();
};

AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .toolProvider(provider)
        .build();
```
Best practices:
- @Tool names: use imperative verbs ("Get", "Send", "Calculate") with a clear scope.
- @P descriptions: include format, constraints, and valid values; vague descriptions cause incorrect LLM calls.
- Set toolExecutionTimeout() for external service calls.
- Use executeToolsConcurrently() when tools are independent.

| Issue | Solution |
|---|---|
| LLM calls non-existent tool | Add .hallucinatedToolNameStrategy() returning a safe error message |
| Tools receive wrong parameters | Refine @P descriptions; add .toolArgumentsErrorHandler() |
| Tool execution hangs | Set .toolExecutionTimeout(Duration.ofSeconds(N)) |
| Rate limit errors from external API | Add retry logic or rate limiter inside the tool method |
| LLM ignores tool output | Ensure the tool returns a string the LLM can interpret |
See references/error-handling.md for resilience patterns and references/core-patterns.md for parameter and return type details.
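For the rate-limit issue above, a simple retry with exponential backoff inside the tool method keeps transient failures away from the LLM. A generic plain-Java sketch (the class name, attempt count, and delays are illustrative):

```java
import java.util.function.Supplier;

public class Retry {

    // Retries the call with exponential backoff; rethrows the last failure.
    public static <T> T withBackoff(Supplier<T> call, int maxAttempts, long initialDelayMillis) {
        RuntimeException last = null;
        long delay = initialDelayMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delay);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last;
    }
}
```

A tool method would wrap its external call, e.g. Retry.withBackoff(() -> weatherService.getWeather(city, unit), 3, 200), so the LLM only ever sees the final outcome.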
| Annotation / API | Purpose |
|---|---|
| @Tool | Marks a method as a callable tool |
| @P | Describes a tool parameter for the LLM |
| @ToolMemoryId | Injects the conversation/user ID into the tool |
| AiServices.builder() | Creates an AI service with registered tools |
| ReturnBehavior.IMMEDIATELY | Execute the tool without waiting for an LLM response |
| ToolProvider | Dynamic tool provisioning based on context |
| executeToolsConcurrently() | Run independent tool calls in parallel |
| toolExecutionTimeout() | Timeout for individual tool calls |
Common pitfalls:
- Vague @Tool or @P descriptions directly cause incorrect tool calls; be specific about formats and constraints.
- Prefer ToolProvider for conditional registration.
- Use executeToolsConcurrently() only when tool calls are independent.

Related skills:
- langchain4j-ai-services-patterns: high-level AI service configuration
- langchain4j-rag-implementation-patterns: RAG retrieval with tool integration
- langchain4j-spring-boot-integration: tool registration in Spring Boot applications