🚨 EXECUTION NOTICE FOR CLAUDE
When you invoke this command via SlashCommand, the system returns THESE INSTRUCTIONS below.
YOU are the executor. This is NOT an autonomous subprocess.
- ✅ The phases below are YOUR execution checklist
- ✅ YOU must run each phase immediately using tools (Bash, Read, Write, Edit, TodoWrite)
- ✅ Complete ALL phases before considering this command done
- ❌ DON'T wait for "the command to complete" - YOU complete it by executing the phases
- ❌ DON'T treat this as status output - it IS your instruction set
Immediately after SlashCommand returns, start executing Phase 0, then Phase 1, etc.
See @CLAUDE.md section "SlashCommand Execution - YOU Are The Executor" for detailed explanation.
Security Requirements
CRITICAL: All generated files must follow security rules:
@docs/security/SECURITY-RULES.md
Key requirements:
- Never hardcode API keys or secrets
- Use placeholders: your_service_key_here
- Protect .env files with .gitignore
- Create .env.example with placeholders only
- Document key acquisition for users
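To make the placeholder rule concrete in generated TypeScript code, a small helper along these lines can be included. This is a minimal sketch assuming a Node-style runtime where keys live in process.env; the requireEnv name is illustrative and not part of the AI SDK.

```typescript
// Minimal sketch (assumption: Node-style runtime with process.env available).
// Keys are never hardcoded; a missing key fails fast with guidance for the user.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `Missing ${name}. Copy .env.example to .env.local (or .env), then add your key ` +
        `from the provider's dashboard. Never commit real keys.`
    );
  }
  return value;
}

// Example usage in generated code:
// const apiKey = requireEnv("OPENAI_API_KEY");
```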
You are tasked with helping the user create a new Vercel AI SDK application. Follow these steps in order:
Available Skills
This command has access to the following skills from the vercel-ai-sdk plugin:
- SKILLS-OVERVIEW.md
- agent-workflow-patterns: AI agent workflow patterns including ReAct agents, multi-agent systems, loop control, tool orchestration, and autonomous agent architectures. Use when building AI agents, implementing workflows, creating autonomous systems, or when user mentions agents, workflows, ReAct, multi-step reasoning, loop control, agent orchestration, or autonomous AI.
- generative-ui-patterns: Generative UI implementation patterns for AI SDK RSC including server-side streaming components, dynamic UI generation, and client-server coordination. Use when implementing generative UI, building AI SDK RSC, creating streaming components, or when user mentions generative UI, React Server Components, dynamic UI, AI-generated interfaces, or server-side streaming.
- provider-config-validator: Validate and debug Vercel AI SDK provider configurations including API keys, environment setup, model compatibility, and rate limiting. Use when encountering provider errors, authentication failures, API key issues, missing environment variables, model compatibility problems, rate limiting errors, or when user mentions provider setup, configuration debugging, or SDK connection issues.
- rag-implementation: RAG (Retrieval Augmented Generation) implementation patterns including document chunking, embedding generation, vector database integration, semantic search, and RAG pipelines. Use when building RAG systems, implementing semantic search, creating knowledge bases, or when user mentions RAG, embeddings, vector database, retrieval, document chunking, or knowledge retrieval.
- testing-patterns: Testing patterns for Vercel AI SDK including mock providers, streaming tests, tool calling tests, snapshot testing, and test coverage strategies. Use when implementing tests, creating test suites, mocking AI providers, or when user mentions testing, mocks, test coverage, AI testing, streaming tests, or tool testing.
To use a skill:
!{skill skill-name}
Use skills when you need:
- Domain-specific templates and examples
- Validation scripts and automation
- Best practices and patterns
- Configuration generators
Skills provide pre-built resources to accelerate your work.
Step 1: Fetch Latest Documentation (DO THIS FIRST)
Use WebFetch to read the official documentation NOW before asking any questions. Fetch documentation progressively throughout the setup process to get the most relevant, up-to-date information.
Initial Documentation (fetch in parallel):
- Use WebFetch to read: https://sdk.vercel.ai/docs/introduction
- Use WebFetch to read: https://sdk.vercel.ai/docs/getting-started
- Use WebFetch to read: https://sdk.vercel.ai/docs/ai-sdk-core/overview
- Use WebFetch to read: https://sdk.vercel.ai/docs/ai-sdk-ui/overview
CRITICAL: Do NOT skip these WebFetch calls. Fetch them in parallel. The documentation may have changed since your training data. Only after fetching these docs should you proceed to Step 2.
Step 2: Gather Requirements
IMPORTANT: Ask these questions one at a time. Wait for the user's response before asking the next question. This makes it easier for the user to respond.
Ask the questions in this order (skip any that the user has already provided via arguments):
- Language (ask first): "Would you like to use TypeScript, JavaScript, or Python?"
  - Wait for response before continuing
- Project name (ask second): "What would you like to name your project?"
  - If $ARGUMENTS is provided, use that as the project name and skip this question
  - Wait for response before continuing
- Framework choice (ask third): "Which framework would you like to use?
  - Next.js (React framework with App Router support)
  - React (standalone with Vite)
  - Node.js (backend/API only)
  - Python (backend/API with FastAPI or Flask)
  - Svelte (with SvelteKit)
  - Vue (with Nuxt or Vite)"
  - Wait for response before continuing
- AI Provider (ask fourth): "Which AI provider would you like to use?
  - OpenAI (GPT-4, GPT-3.5)
  - Anthropic (Claude)
  - Google (Gemini)
  - Multiple providers (configure several)"
  - Wait for response before continuing
- Features (ask fifth): "What features do you need? (Select all that apply)
  - Text streaming (real-time AI responses)
  - Tool/Function calling (AI can call your functions)
  - Multi-modal (text, images, files)
  - Chat history management
  - Rate limiting and caching"
  - Wait for response before continuing
- Tooling choice (ask sixth): Tell the user which tools you plan to use and confirm that they match the user's preferences (for example, they may prefer pnpm or bun over npm). Respect the user's preferences when executing on the requirements.
After all questions are answered, proceed to fetch additional documentation based on their choices and create the setup plan.
Step 3: Fetch Feature-Specific Documentation
Based on the user's feature selections, fetch relevant documentation:
- If Text Streaming selected: fetch the streaming-related pages from the AI SDK docs
- If Tool Calling selected: fetch the tool/function calling pages
- If Multi-modal selected: fetch the multi-modal (images and files) pages
- If Chat History selected: fetch the message history and persistence pages
- For provider-specific setup: fetch the documentation page for each chosen provider
Setup Plan
Based on the user's answers, create a plan that includes:
- Project initialization:
  - Create project directory (if it doesn't exist)
  - Initialize framework and package manager:
    - Next.js: npx create-next-app@latest or manual setup with TypeScript
    - React: npm create vite@latest with React + TypeScript template
    - Node.js: npm init -y and set up package.json with type: "module" and scripts
    - Python: create requirements.txt or use poetry init
    - Svelte: npm create svelte@latest
    - Vue: npm create vue@latest
  - Add necessary configuration files based on framework
- Check for Latest Versions:
  - Before installing, check npm (or PyPI for Python) via WebSearch or WebFetch for the latest stable versions of the SDK and provider packages
- SDK Installation:
  - TypeScript/JavaScript: npm install ai@latest (or specify the latest version)
  - Install provider SDKs based on selections:
    - OpenAI: npm install @ai-sdk/openai
    - Anthropic: npm install @ai-sdk/anthropic
    - Google: npm install @ai-sdk/google
  - Python: pip install the appropriate provider packages (the AI SDK itself is a TypeScript/JavaScript library, so use the provider's official Python SDK)
  - After installation, verify the installed versions
- Create starter files:
  - Create appropriate entry points based on framework:
    - Next.js: Create API route in app/api/chat/route.ts and UI component
    - React: Create components in src/ with example chat interface
    - Node.js: Create index.ts or src/index.ts with API endpoints
    - Python: Create main.py with FastAPI or Flask endpoints
  - Include proper imports and error handling
  - Use modern, up-to-date syntax and patterns from the latest SDK version
  - Implement selected features (streaming, tool calling, etc.); see the sketches after this plan
- Environment setup:
  - Create a .env.example file with required API keys:
    OPENAI_API_KEY=your_openai_key_here (if using OpenAI)
    ANTHROPIC_API_KEY=your_anthropic_key_here (if using Anthropic)
    GOOGLE_GENERATIVE_AI_API_KEY=your_google_key_here (if using Google)
  - Create .env.local (for Next.js) or .env with placeholder values
  - Add .env.local and .env to .gitignore
  - Explain how to get API keys from respective providers
- Feature Implementation:
  - If Text Streaming: Implement streaming with streamText() or the useChat() hook
  - If Tool Calling: Set up tools with proper schemas and handlers
  - If Multi-modal: Configure file/image handling
  - If Chat History: Implement message storage and retrieval
  - If Rate Limiting: Add rate limiting middleware or configuration
- UI Components (if applicable):
  - Create chat interface components (see the client-side sketch after this plan)
  - Add loading states and error handling
  - Implement streaming UI updates
  - Style with Tailwind CSS (if available) or basic CSS
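To ground the starter-file and feature items above, here is a minimal sketch of a Next.js streaming chat route with one tool. It assumes the OpenAI provider and AI SDK 4.x-style APIs (streamText, tool, toDataStreamResponse); these names have shifted between major versions, so verify them against the documentation fetched in Step 1. The getCurrentTime tool and the gpt-4o-mini model id are illustrative choices, not requirements.

```typescript
// app/api/chat/route.ts - minimal sketch, assuming AI SDK 4.x-style APIs.
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    // The provider reads OPENAI_API_KEY from the environment; never hardcode it.
    model: openai("gpt-4o-mini"),
    messages,
    tools: {
      // Illustrative tool: replace with tools that fit the user's project.
      getCurrentTime: tool({
        description: "Returns the server's current time as an ISO string.",
        parameters: z.object({}),
        execute: async () => ({ now: new Date().toISOString() }),
      }),
    },
  });

  // Stream the result back to the client; the exact response helper differs
  // between SDK major versions, so confirm the current name in the docs.
  return result.toDataStreamResponse();
}
```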
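A matching client-side sketch for the UI Components item, assuming the useChat() hook exported from @ai-sdk/react (older releases export it from ai/react, and the returned fields differ across major versions, so check the fetched docs before generating this):

```tsx
// app/page.tsx - minimal client sketch, assuming the AI SDK 4.x useChat() hook.
"use client";

import { useChat } from "@ai-sdk/react";

export default function Chat() {
  // By default useChat() posts to /api/chat, matching the route sketch above.
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((message) => (
        <p key={message.id}>
          <strong>{message.role}:</strong> {message.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}
```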
Implementation
After gathering requirements and getting user confirmation on the plan:
- Check for latest package versions using WebSearch or WebFetch
- Execute the setup steps
- Create all necessary files
- Install dependencies (always use latest stable versions)
- Verify installed versions and inform the user
- Create a working example based on their selections
- Add helpful comments in the code explaining what each part does
- VERIFY THE CODE WORKS BEFORE FINISHING:
  - For TypeScript:
    - Run npx tsc --noEmit to check for type errors
    - Fix ALL type errors until type checking passes completely
    - Ensure imports and types are correct
    - Only proceed when type checking passes with no errors
  - For JavaScript:
    - Verify imports are correct
    - Check for basic syntax errors
  - For Python:
    - Verify imports are correct
    - Run basic linting if available
- DO NOT consider the setup complete until the code verifies successfully
Verification
After all files are created and dependencies are installed, invoke the appropriate verifier agent to validate that the Vercel AI SDK application is properly configured and ready for use:
- For TypeScript projects: Invoke the vercel-ai-verifier-ts agent to validate the setup
- For JavaScript projects: Invoke the vercel-ai-verifier-js agent to validate the setup
- For Python projects: Invoke the vercel-ai-verifier-py agent to validate the setup
- The agent will check SDK usage, configuration, functionality, and adherence to official documentation
- Review the verification report and address any issues
Getting Started Guide
Once setup is complete and verified, provide the user with:
- Next steps:
  - How to set their API key(s)
  - How to run their application:
    - Next.js: npm run dev (opens on http://localhost:3000)
    - React: npm run dev (Vite dev server)
    - Node.js: npm start or node --loader ts-node/esm index.ts
    - Python: python main.py or uvicorn main:app --reload
- Useful resources:
  - The official AI SDK documentation at https://sdk.vercel.ai/docs
  - The chosen provider's documentation for API keys and model options
- Common next steps:
  - How to customize prompts and model parameters
  - How to add custom tools/functions
  - How to implement authentication
  - How to deploy to Vercel or other platforms
  - How to add chat history persistence
  - How to implement rate limiting
- Testing the application:
  - Provide example prompts to test
  - Show how to test tool calling (if enabled)
  - Demonstrate streaming behavior
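One concrete way to demonstrate streaming is a small smoke-test script such as the sketch below. It assumes Node 18+ (global fetch), the dev server running on localhost:3000, and the /api/chat route from the earlier sketches; the wire format of the streamed chunks depends on the installed SDK version, so this only confirms that a response streams back.

```typescript
// scripts/smoke-test.ts - minimal streaming smoke test (assumes Node 18+ and
// the /api/chat route sketched earlier; adjust the URL and message shape to match).
async function main() {
  const res = await fetch("http://localhost:3000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "What time is it right now?" }],
    }),
  });

  if (!res.ok || !res.body) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }

  // Print chunks as they arrive to confirm the response is actually streaming.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
  process.stdout.write("\n");
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```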
Important Notes
- ALWAYS USE LATEST VERSIONS: Before installing any packages, check for the latest versions using WebSearch or by checking npm/PyPI directly
- FETCH DOCS PROGRESSIVELY: Don't fetch all docs at once. Fetch relevant documentation based on user's choices throughout the process
- VERIFY CODE RUNS CORRECTLY:
  - For TypeScript: Run npx tsc --noEmit and fix ALL type errors before finishing
  - For JavaScript: Verify syntax and imports are correct
  - For Python: Verify syntax and imports are correct
- Do NOT consider the task complete until the code passes verification
- Verify the installed versions after installation and inform the user
- Check the official documentation for any version-specific requirements (Node.js version, Python version, etc.)
- Always check if directories/files already exist before creating them
- Use the user's preferred package manager (npm, yarn, pnpm, bun for TypeScript/JavaScript; pip, poetry for Python)
- Ensure all code examples are functional and include proper error handling
- Use modern syntax and patterns that are compatible with the latest SDK version
- Make the experience interactive and educational
- ASK QUESTIONS ONE AT A TIME - Do not ask multiple questions in a single response
- PROGRESSIVE DOCUMENTATION: Fetch docs as needed based on user selections, not all at once
Begin by fetching the initial documentation, then ask the FIRST requirement question only. Wait for the user's answer before proceeding to the next question.