From amplify
Builds MCP servers in Python (FastMCP) or Node/TypeScript (MCP SDK) to expose third-party APIs as LLM tools. Handles scaffolding, adding tools, evaluations, and tool interface design.
npx claudepluginhub wunki/amplify --plugin ask-questions-if-underspecified

This skill uses the workspace's default tool permissions.
This skill produces high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services. An MCP server provides tools that allow LLMs to access external services and APIs. Quality is measured by how well those tools enable LLMs to accomplish real-world tasks.
Creating an MCP server involves four phases. Phases 1-3 apply when building or extending a server. Phase 4 applies when creating evaluations for an existing server.
Ask the user which language to use via AskUserQuestion (single-select) if not already specified: Python (FastMCP) or Node/TypeScript (MCP SDK).
All steps below branch by language where relevant.
Design tools for AI agents, not human developers. Apply these principles:
Build for Workflows, Not Just API Endpoints: consolidate related operations into single tools (e.g., schedule_event checks availability and creates the event in one call).
Optimize for Limited Context: keep responses compact and offer concise vs. detailed response format options.
Design Actionable Error Messages: errors should tell the agent what went wrong and how to recover.
Follow Natural Task Subdivisions: group related tools with consistent service_ prefixes.
Use Evaluation-Driven Development: measure tool quality with evaluations and iterate.
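As a sketch of the first and third principles, here is a stdlib-only Python mock (Calendar, schedule_event, and the slot format are all hypothetical, standing in for a real calendar API) showing a workflow-level tool that checks availability and creates the event in one call, and returns an actionable error on conflict:

```python
from dataclasses import dataclass, field

# Minimal in-memory stand-in for a calendar API (all names hypothetical).
@dataclass
class Calendar:
    events: dict = field(default_factory=dict)  # slot -> title

    def is_free(self, slot: str) -> bool:
        return slot not in self.events

    def create(self, slot: str, title: str) -> str:
        self.events[slot] = title
        return f"event:{slot}"

def schedule_event(cal: Calendar, slot: str, title: str) -> str:
    """One agent-facing tool wrapping the whole workflow:
    check availability, then create -- not two separate endpoints."""
    if not cal.is_free(slot):
        # Actionable error: say what went wrong and what to try next.
        return f"Error: {slot} is busy. Call schedule_event with another slot."
    return cal.create(slot, title)

cal = Calendar()
print(schedule_event(cal, "2025-01-06T10:00", "standup"))  # event:2025-01-06T10:00
print(schedule_event(cal, "2025-01-06T10:00", "retro"))    # Error: ... is busy ...
```

The same consolidation applies to any multi-step workflow the agent would otherwise have to orchestrate across several tool calls.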
Fetch the latest MCP protocol specification using WebFetch: https://modelcontextprotocol.io/llms-full.txt
If the URL is unreachable, continue using reference/mcp_best_practices.md as the primary reference.
Read reference/mcp_best_practices.md for universal MCP guidelines. Skip if already loaded this session.
For Python: also fetch https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md and read reference/python_mcp_server.md. If the URL is unreachable, use the local guide only.
For Node/TypeScript: also fetch https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md and read reference/node_mcp_server.md. If the URL is unreachable, use the local guide only.
Read through all available documentation for the API to integrate:
Use WebSearch and WebFetch as needed.
Write a plan covering:
Tool Selection: List the most valuable endpoints to implement. Prioritize tools that enable complete workflows. Identify which tools work together.
Shared Utilities: Common API request patterns, pagination helpers, filtering and formatting utilities, error handling strategy, authentication management.
Input/Output Design: Pydantic models (Python) or Zod schemas (TypeScript) for input validation. Consistent response formats (JSON and Markdown). Character limits and truncation (25,000 characters).
Error Handling: Graceful failure modes. Clear, actionable, LLM-friendly error messages. Rate limiting and timeout handling. Authentication/authorization error handling.
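A minimal sketch of the response-format side of this plan, assuming a hypothetical format_response helper and using the 25,000-character limit named above (stdlib only):

```python
import json

CHARACTER_LIMIT = 25_000  # cap from the plan above

def format_response(data: dict, fmt: str = "markdown") -> str:
    """Render a tool result as JSON or Markdown, truncating oversized
    output with a note so the agent knows data was cut
    (function and parameter names are illustrative)."""
    if fmt == "json":
        text = json.dumps(data, indent=2)
    else:
        text = "\n".join(f"- **{k}**: {v}" for k, v in data.items())
    if len(text) > CHARACTER_LIMIT:
        text = text[:CHARACTER_LIMIT] + "\n[truncated: response exceeded limit]"
    return text

print(format_response({"id": 1, "title": "Launch"}))
```

Routing every tool's output through one formatter keeps the JSON/Markdown behavior and truncation policy consistent across the server.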
For Python: create a single .py file, or organize into modules if complex. See reference/python_mcp_server.md for patterns.
For Node/TypeScript: create package.json, tsconfig.json, and source directory. See reference/node_mcp_server.md for project structure.
Create shared utilities before implementing tools:
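One such utility, a pagination helper, sketched with a stubbed fetch_page callback standing in for the real API client (all names hypothetical):

```python
def paginate(fetch_page, max_items=100):
    """Shared pagination helper: call fetch_page(cursor) repeatedly until
    the API stops returning a next cursor or max_items is reached."""
    items, cursor = [], None
    while len(items) < max_items:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            break
    return items[:max_items]

# Stub API returning three pages of two items each.
pages = {None: ([1, 2], "a"), "a": ([3, 4], "b"), "b": ([5, 6], None)}
print(paginate(lambda c: pages[c]))  # [1, 2, 3, 4, 5, 6]
```

Every listing tool can then share one tested pagination path instead of re-implementing cursor handling.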
For each tool in the plan:
Define Input Schema:
Write Comprehensive Docstrings/Descriptions:
Implement Tool Logic:
Add Tool Annotations:
readOnlyHint: true for read-only operations
destructiveHint: false for non-destructive operations
idempotentHint: true if repeated calls have the same effect
openWorldHint: true if interacting with external systems

Read reference/python_mcp_server.md when implementing in Python. Skip if already loaded.
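For reference, these hints travel as a camelCase annotations object in the tool's metadata; a plain-Python sketch of the shape (values illustrative for a read-only search tool that queries an external service):

```python
# Annotation hints as they appear alongside a tool definition (field names
# from the MCP spec; values here describe a hypothetical read-only search).
tool_annotations = {
    "readOnlyHint": True,       # does not modify external state
    "destructiveHint": False,   # no irreversible effects
    "idempotentHint": True,     # repeated calls return the same result
    "openWorldHint": True,      # talks to an external system
}
print(sorted(tool_annotations))
```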
Confirm the implementation follows:
@mcp.tool registration
model_config on Pydantic input models
Named constants (e.g., CHARACTER_LIMIT, API_BASE_URL)

Read reference/node_mcp_server.md when implementing in Node/TypeScript. Skip if already loaded.
Confirm the implementation follows:
server.registerTool for tool registration
.strict() on Zod schemas
No any types
Explicit Promise<T> return types
Successful compilation (npm run build)

Review the code for:
MCP servers are long-running processes. Running them directly in the main process will hang indefinitely. Use safe testing approaches:
Python: python -m py_compile your_server.py
Node/TypeScript: npm run build and verify dist/index.js is created

Avoid running python server.py or node dist/index.js directly without a timeout or separate shell.
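A sketch of the safe pattern in Python: syntax-check with py_compile instead of executing the server, and bound the CLI form with a timeout so nothing hangs (the temp file here stands in for your server module):

```python
import py_compile
import subprocess
import sys
import tempfile

# Write a trivial module to stand in for your server file (hypothetical).
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1\n")
    path = f.name

# Syntax-check without executing -- safe for long-running MCP servers.
py_compile.compile(path, doraise=True)

# Equivalent CLI form, bounded by a timeout so it cannot hang the session.
result = subprocess.run(
    [sys.executable, "-m", "py_compile", path],
    timeout=30,
)
print(result.returncode)  # 0 on success
```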
For Python: see "Quality Checklist" in reference/python_mcp_server.md.
For Node/TypeScript: see "Quality Checklist" in reference/node_mcp_server.md.
Apply this phase after implementing a server, or when the user asks to write evaluations for an existing MCP server. Skip Phases 1-3 if the server already exists.
Read reference/evaluation.md for complete evaluation guidelines. The summary below covers the key requirements.
Evaluations test whether LLMs can effectively use the MCP server to answer realistic, complex questions using only the tools provided.
Follow the process in reference/evaluation.md:
Each question must be answerable only through the server's tools and have a single, verifiable answer; see reference/evaluation.md for the full criteria.
Create an XML file:
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
</evaluation>
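An evaluation harness can parse this file with the standard library; this sketch (element names taken from the example above, the scoring loop itself not shown) extracts question/answer pairs:

```python
import xml.etree.ElementTree as ET

# Inline copy of the evaluation format shown above (question shortened).
EVAL_XML = """<evaluation>
  <qa_pair>
    <question>What number X was being determined?</question>
    <answer>3</answer>
  </qa_pair>
</evaluation>"""

# Collect (question, answer) pairs so a harness can pose each question
# to the agent and compare its reply against the expected answer.
root = ET.fromstring(EVAL_XML)
pairs = [(p.findtext("question"), p.findtext("answer"))
         for p in root.findall("qa_pair")]
print(pairs[0][1])  # 3
```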