Builds complete MCP servers from specs or natural language, generating tools, resources, validation, tests, and docs in TypeScript (@modelcontextprotocol/sdk) or Python (FastMCP). Use for AI agent tool/resource access.
npx claudepluginhub rune-kit/rune --plugin @rune/analytics

This skill uses the workspace's default tool permissions.
MCP server builder. Generates complete, tested MCP servers from a natural language description or specification. Handles tool definitions, resource handlers, input validation, error handling, configuration, tests, and documentation. Supports TypeScript (official SDK) and Python (FastMCP).
Triggers:
- cook — when an MCP-related task is detected (keywords: "MCP server", "MCP tool", "model context protocol")
- scaffold — when the MCP Server template is selected
- /rune mcp-builder <description> — manual invocation
- mcp.json, @modelcontextprotocol/sdk, or fastmcp present in dependencies

Delegated skills:
- ba (L2): if the user description is vague — elicit requirements for what tools/resources the server should expose
- research (L3): look up target API documentation and existing MCP servers for reference
- test (L2): generate and run the test suite for the server
- docs (L2): generate server documentation (tool catalog, installation, configuration)
- verification (L3): verify the server builds and tests pass

Entry points:
- cook (L1): when an MCP-related task is detected
- scaffold (L1): MCP Server template in Phase 5
- /rune mcp-builder: direct invocation

If the description is detailed enough (tools, resources, target API specified), proceed. If vague, ask targeted questions:
If user provides a detailed spec or existing API docs → extract answers, confirm.
Determine server structure based on spec:
TypeScript (default):
mcp-server-<name>/
├── src/
│ ├── index.ts — server entry point, tool/resource registration
│ ├── tools/
│ │ ├── <tool-name>.ts — one file per tool
│ │ └── index.ts — tool registry
│ ├── resources/
│ │ ├── <resource>.ts — one file per resource type
│ │ └── index.ts — resource registry
│ ├── lib/
│ │ ├── client.ts — external API client (if applicable)
│ │ └── types.ts — shared types
│ └── config.ts — environment variable validation
├── tests/
│ ├── tools/
│ │ └── <tool-name>.test.ts
│ └── resources/
│ └── <resource>.test.ts
├── package.json
├── tsconfig.json
├── .env.example
└── README.md
Python (FastMCP):
mcp-server-<name>/
├── src/
│ ├── server.py — FastMCP server with tool/resource decorators
│ ├── tools/
│ │ └── <tool_name>.py
│ ├── resources/
│ │ └── <resource>.py
│ ├── lib/
│ │ ├── client.py — external API client
│ │ └── types.py — Pydantic models
│ └── config.py — settings via pydantic-settings
├── tests/
│ ├── test_<tool_name>.py
│ └── test_<resource>.py
├── pyproject.toml
├── .env.example
└── README.md
For each tool:
TypeScript:
import { z } from 'zod';

export const toolName = {
  name: 'tool_name',
  description: 'What this tool does — used by AI to decide when to call it',
  inputSchema: z.object({
    param1: z.string().describe('Description for AI'),
    param2: z.number().optional().describe('Optional parameter'),
  }),
  async handler(input: { param1: string; param2?: number }) {
    // Implementation
    const result = { /* ... */ };
    return { content: [{ type: 'text', text: JSON.stringify(result) }] };
  },
};
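The per-tool files feed the tool registry (src/tools/index.ts) that the server entry point iterates over at startup. A minimal sketch — `exampleTool` and `getTool` are illustrative names standing in for the real per-file imports:

```typescript
// src/tools/index.ts — aggregate tool definitions into one registry.
// exampleTool is a stand-in; real servers import one definition per file.
const exampleTool = {
  name: 'tool_name',
  description: 'Example tool',
  async handler(input: { param1: string }) {
    return { content: [{ type: 'text' as const, text: JSON.stringify({ echo: input.param1 }) }] };
  },
};

export const tools = [exampleTool];

// Dispatch helper: look up a tool by name, failing loudly on unknown names.
export function getTool(name: string) {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool;
}
```

Keeping lookup in one place means the server entry point only needs to wire `tools` into its list/call handlers.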
Python (FastMCP):
import json

from fastmcp import FastMCP

mcp = FastMCP("server-name")

@mcp.tool()
async def tool_name(param1: str, param2: int | None = None) -> str:
    """What this tool does — used by AI to decide when to call it."""
    # Implementation
    result = {}  # ...
    return json.dumps(result)
For each resource:
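A resource module can mirror the tool shape: a URI template plus a reader, per this skill's src/resources/<name>.ts layout. A sketch — the `userResource` name, `users://` scheme, and `reader` signature are illustrative, not SDK verbatim:

```typescript
// src/resources/user.ts — one resource type per file (sketch).
// The reader receives the concrete URI and the parameters extracted
// from the template; it returns MCP-style resource contents.
export const userResource = {
  uriTemplate: 'users://{id}', // RFC 6570-style URI template
  name: 'user',
  description: 'Profile data for a single user',
  mimeType: 'application/json',
  async reader(uri: string, params: { id: string }) {
    // In a real server this would call the API client in lib/client.ts.
    const user = { id: params.id };
    return {
      contents: [{ uri, mimeType: 'application/json', text: JSON.stringify(user) }],
    };
  },
};
```

As with tools, a src/resources/index.ts registry collects these so the entry point can register them in one pass.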
Generate .env.example with all required environment variables:
# Required
API_KEY=your_api_key_here
API_BASE_URL=https://api.example.com
# Optional
LOG_LEVEL=info
CACHE_TTL=300
Generate config validation:
// config.ts
import { z } from 'zod';

const envSchema = z.object({
  API_KEY: z.string().min(1, 'API_KEY is required'),
  API_BASE_URL: z.string().url().default('https://api.example.com'),
  LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
  CACHE_TTL: z.coerce.number().default(300), // matches the optional var in .env.example
});

export const config = envSchema.parse(process.env);
Before generating tests, classify every tool as query or mutation:
| Category | Examples | Behavior |
|---|---|---|
| query | read, list, search, get, fetch | Auto-approve — no confirmation needed |
| mutation | create, update, delete, send, write, publish | Require user confirmation before execution |
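The classification above can be sketched as a verb-prefix check (the verb lists come from the table; `classifyTool` is an illustrative helper, and real servers may still need manual overrides for ambiguous names):

```typescript
type Safety = 'query' | 'mutation';

const QUERY_VERBS = ['read', 'list', 'search', 'get', 'fetch'];
const MUTATION_VERBS = ['create', 'update', 'delete', 'send', 'write', 'publish'];

// Classify a tool by the leading verb of its snake_case name.
// Unknown verbs default to "mutation" — the safe (confirm-first) choice.
function classifyTool(toolName: string): Safety {
  const verb = toolName.split('_')[0].toLowerCase();
  if (QUERY_VERBS.includes(verb)) return 'query';
  if (MUTATION_VERBS.includes(verb)) return 'mutation';
  return 'mutation';
}
```

Defaulting unknown verbs to mutation errs toward an unnecessary confirmation rather than an unguarded write.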
Implementation rules:

Add safety metadata to each tool definition:

export const deleteTool = {
  name: 'delete_user',
  description: '...',
  safety: 'mutation' as const, // ← add this
  inputSchema: z.object({ id: z.string() }),
  async handler(input) { /* ... */ },
};
For each mutation tool, generate a preview step that surfaces WHAT WILL HAPPEN before the action runs:

// In the handler, before executing:
if (tool.safety === 'mutation') {
  return {
    content: [{ type: 'text', text:
      `⚠️ Will delete user "${user.name}" (ID: ${input.id}). This cannot be undone.\nConfirm? (yes/no)`
    }],
    requiresConfirmation: true,
  };
}
// Proceed only after confirmation received
In Python, use a @confirm_mutation decorator or an inline guard in the docstring:

@mcp.tool()
async def delete_user(id: str) -> str:
    """[MUTATION] Delete a user by ID. Will prompt for confirmation before executing."""
    ...
Surface the classification in the generated docs (e.g. a 🔒 badge on mutation tools).

For each tool, generate tests covering the happy path, input validation, error handling, and edge cases; for each resource, generate equivalent read tests:
describe('tool_name', () => {
  it('should return results for valid input', async () => {
    const result = await toolName.handler({ param1: 'test' });
    expect(result.content[0].type).toBe('text');
    // Assert expected structure
  });

  it('should handle API errors gracefully', async () => {
    // Mock API failure
    const result = await toolName.handler({ param1: 'trigger-error' });
    expect(result.isError).toBe(true);
  });
});
Produce README.md with:
Claude Code installation snippet:
{
  "mcpServers": {
    "server-name": {
      "command": "node",
      "args": ["path/to/dist/index.js"],
      "env": {
        "API_KEY": "your_key"
      }
    }
  }
}
Invoke rune:verification:
- TypeScript: tsc --noEmit + npm test
- Python: mypy src/ + pytest

TypeScript:
mcp-server-<name>/
├── src/
│ ├── index.ts — server entry, tool/resource registration
│ ├── tools/<name>.ts — one file per tool (Zod input schema + handler)
│ ├── resources/<name>.ts — one file per resource (URI template + reader)
│ ├── lib/client.ts — external API client
│ ├── lib/types.ts — shared TypeScript interfaces
│ └── config.ts — env var validation (Zod schema)
├── tests/tools/<name>.test.ts — per-tool tests (happy, validation, error, edge)
├── tests/resources/<name>.test.ts
└── package.json, tsconfig.json, .env.example, README.md
Python (FastMCP):
mcp-server-<name>/
├── src/
│ ├── server.py — FastMCP server with @mcp.tool() decorators
│ ├── tools/<name>.py — tool implementations
│ ├── resources/<name>.py
│ ├── lib/client.py — external API client
│ ├── lib/types.py — Pydantic models
│ └── config.py — pydantic-settings
├── tests/test_<name>.py
└── pyproject.toml, .env.example, README.md
When the MCP server needs to call multiple AI providers (e.g., both Anthropic and OpenAI), use the Provider Adapter pattern to normalize different APIs behind a unified interface.
interface ProviderAdapter {
  formatRequest(params: RequestParams): { url: string; init: RequestInit };
  parseResponse(data: unknown): { content: string; usage: TokenUsage | null };
  formatStreamRequest(params: RequestParams): { url: string; init: RequestInit };
  parseSSEEvent(eventType: string, data: string): StreamChunk | null;
}

type StreamChunk =
  | { type: "thinking"; content: string }
  | { type: "text"; content: string }
  | { type: "done" }
  | { type: "done_with_usage"; usage: TokenUsage }
  | { type: "usage_delta"; inputTokens?: number; outputTokens?: number }
  | { type: "error"; message: string };
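As a sketch of one concrete adapter's parseSSEEvent, under Anthropic's streaming event names (the payload field paths are simplified for illustration, and the StreamChunk union is trimmed to the cases used here):

```typescript
type TokenUsage = { inputTokens: number; outputTokens: number };

type StreamChunk =
  | { type: 'text'; content: string }
  | { type: 'usage_delta'; inputTokens?: number; outputTokens?: number }
  | { type: 'done' }
  | { type: 'error'; message: string };

// Hypothetical Anthropic-flavored adapter: maps provider SSE events
// onto the unified StreamChunk union.
const anthropicAdapter = {
  parseSSEEvent(eventType: string, data: string): StreamChunk | null {
    const payload = JSON.parse(data);
    switch (eventType) {
      case 'message_start':
        return { type: 'usage_delta', inputTokens: payload.message?.usage?.input_tokens };
      case 'content_block_delta':
        return { type: 'text', content: payload.delta?.text ?? '' };
      case 'message_delta':
        return { type: 'usage_delta', outputTokens: payload.usage?.output_tokens };
      case 'message_stop':
        return { type: 'done' };
      default:
        return null; // ignore ping and other housekeeping events
    }
  },
};
```

An OpenAI adapter would implement the same method with its own event names, so the streaming loop never branches on provider.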
- parseSSEEvent maps provider-specific event names (Anthropic: content_block_delta, message_start; OpenAI: response.output_text.delta, [DONE]) onto the unified StreamChunk union
- Usage accounting from each provider is normalized into the shared TokenUsage type

When building MCP servers that call AI providers, support dual-model configuration — allow users to specify a primary model for critical operations and a cheaper model for background tasks (summarization, classification, metadata extraction). This avoids burning expensive API credits on tasks that don't need maximum quality.
// config.ts
const config = {
  primaryModel: process.env.PRIMARY_MODEL || 'claude-sonnet-4-20250514',
  backgroundModel: process.env.BACKGROUND_MODEL || 'claude-haiku-4-5-20251001',
};
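On top of that config, dispatch can be a one-line lookup. The BACKGROUND_TASKS set and modelFor helper are illustrative names; the config literal is repeated here so the sketch is self-contained:

```typescript
const config = {
  primaryModel: process.env.PRIMARY_MODEL || 'claude-sonnet-4-20250514',
  backgroundModel: process.env.BACKGROUND_MODEL || 'claude-haiku-4-5-20251001',
};

// Background tasks (summarization, classification, metadata extraction)
// go to the cheaper model; everything else uses the primary model.
const BACKGROUND_TASKS = new Set(['summarize', 'classify', 'extract_metadata']);

function modelFor(task: string): string {
  return BACKGROUND_TASKS.has(task) ? config.backgroundModel : config.primaryModel;
}
```

Routing by task name keeps the cost decision in one place instead of scattering model IDs through tool handlers.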
| Failure Mode | Severity | Mitigation |
|---|---|---|
| Tool descriptions too vague for AI to use effectively | HIGH | Step 3: descriptions must explain WHEN to use the tool, not just WHAT it does |
| Missing input validation → server crashes on bad input | HIGH | Constraint 1: Zod/Pydantic validation on all inputs |
| Hardcoded API keys in generated code | CRITICAL | Constraint 3: always use env vars + .env.example |
| Tests mock everything → no real integration coverage | MEDIUM | Generate both unit tests (mocked) and integration test template (real API) |
| Generated server doesn't match MCP spec | HIGH | Use official SDK — don't hand-roll protocol handling |
| Installation docs only for Claude Code | LOW | Include Cursor/Windsurf config examples too |
| Mutation tool without confirmation gate | CRITICAL | Step 3.5: classify every tool — any write/delete/send without a preview+confirm step is a footgun |
| Artifact | Format | Location |
|---|---|---|
| MCP server source code | TypeScript or Python | mcp-server-<name>/src/ |
| Tool definitions (one per tool) | TS/Python files | src/tools/<name>.ts or .py |
| Resource handlers | TS/Python files | src/resources/<name>.ts or .py |
| Test suite | TS/Python test files | tests/ |
| README with tool catalog | Markdown | mcp-server-<name>/README.md |
| Environment config template | .env.example | project root |
Budget: ~3000-6000 tokens input, ~2000-5000 tokens output. Recommended model: Sonnet — MCP server generation is a structured code task, not architectural reasoning.
Scope guardrail: mcp-builder generates the server and tests — it does not deploy, register with MCP registries, or configure the host IDE beyond providing the installation snippet.