Use when implementing ANY Apple Intelligence or on-device AI feature. Covers Foundation Models, @Generable, LanguageModelSession, structured output, Tool protocol, iOS 26 AI integration.
Routes iOS developers to the correct Apple Intelligence or Foundation Models implementation patterns and troubleshooting skills.
Install: `npx claudepluginhub charleswiltgen/axiom`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
You MUST use this skill for ANY Apple Intelligence or Foundation Models work.
Use this router whenever a developer asks for Apple Intelligence or on-device AI work. First, determine which kind of AI the developer needs:
| Developer Intent | Route To |
|---|---|
| On-device text generation (Apple Intelligence) | Stay here → Foundation Models skills |
| Custom ML model deployment (PyTorch, TensorFlow) | Route to ios-ml → CoreML conversion, compression |
| Computer vision (image analysis, OCR, segmentation) | Route to ios-vision → Vision framework |
| Cloud API integration (OpenAI, etc.) | Route to ios-networking → URLSession patterns |
| System AI features (Writing Tools, Genmoji) | No custom code needed — these are system-provided |
Key boundary: ios-ai vs ios-ml
Foundation Models + concurrency (session blocking the main thread, UI freezes): check for calls missing `await` or generation work running on @MainActor.

Foundation Models + data (@Generable decoding errors, structured output issues): route below.
Implementation patterns → /skill axiom-foundation-models
API reference → /skill axiom-foundation-models-ref
Diagnostics → /skill axiom-foundation-models-diag
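The concurrency case above usually means generation is being awaited incorrectly. A minimal sketch of keeping a session responsive, assuming iOS 26+ and the FoundationModels framework (the `summarize(_:)` function name and prompt are illustrative):

```swift
import FoundationModels

// Hypothetical helper: run generation as async work so the UI never blocks.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    // `respond(to:)` is async — awaiting it suspends the task
    // instead of blocking the main thread.
    let response = try await session.respond(to: text)
    return response.content
}
```

Calling this from a SwiftUI view would typically happen inside a `Task { }` so the main actor is never tied up while the model generates.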
| Thought | Reality |
|---|---|
| "Foundation Models is just LanguageModelSession" | Foundation Models has @Generable, Tool protocol, streaming, and guardrails. foundation-models covers all. |
| "I'll figure out the AI patterns as I go" | AI APIs have specific error handling and fallback requirements. foundation-models prevents runtime failures. |
| "I've used LLMs before, this is similar" | Apple's on-device models have unique constraints (guardrails, context limits). foundation-models is Apple-specific. |
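To make the "Foundation Models is just LanguageModelSession" correction concrete, here is a hedged sketch of @Generable structured output, assuming iOS 26+ (the `Itinerary` type, its fields, and the prompt are illustrative, not from the skill):

```swift
import FoundationModels

// Hypothetical @Generable type: the framework derives a generation
// schema from the struct, and @Guide steers each field.
@Generable
struct Itinerary {
    @Guide(description: "A short, descriptive trip title")
    var title: String
    @Guide(description: "Day-by-day activity summaries")
    var days: [String]
}

func planTrip(to city: String) async throws -> Itinerary {
    let session = LanguageModelSession()
    // `generating:` constrains decoding to the @Generable schema,
    // so the result is a typed value rather than free-form text.
    let response = try await session.respond(
        to: "Plan a weekend trip to \(city).",
        generating: Itinerary.self
    )
    return response.content
}
```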
foundation-models: implementation patterns — @Generable, LanguageModelSession, Tool protocol, streaming.
foundation-models-diag: diagnostics — blocked generation, guardrail errors, context-limit failures.
User: "How do I use Apple Intelligence to generate structured data?"
→ Invoke: /skill axiom-foundation-models
User: "My AI generation is being blocked"
→ Invoke: /skill axiom-foundation-models-diag
User: "Show me @Generable examples"
→ Invoke: /skill axiom-foundation-models-ref
User: "Implement streaming AI generation"
→ Invoke: /skill axiom-foundation-models
User: "I want to add AI to my app"
→ First ask: Apple Intelligence (Foundation Models) or custom ML model? Route accordingly.
User: "My Foundation Models session is blocking the UI"
→ Invoke: /skill axiom-foundation-models (async patterns) + also invoke ios-concurrency if needed
User: "I want to run my PyTorch model on device"
→ Route to: ios-ml router (CoreML conversion, not Foundation Models)
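The misconception table notes that AI APIs have specific fallback requirements. A minimal availability-check sketch, assuming iOS 26+ (the `generateTagline(for:)` function and fallback value are illustrative):

```swift
import FoundationModels

// Hypothetical helper: check model availability first, and degrade
// gracefully instead of crashing when generation is unavailable.
func generateTagline(for product: String) async -> String {
    switch SystemLanguageModel.default.availability {
    case .available:
        do {
            let session = LanguageModelSession()
            let response = try await session.respond(
                to: "Write a short tagline for \(product)."
            )
            return response.content
        } catch {
            // Guardrail or generation errors should fall back, not crash.
            return product
        }
    case .unavailable(let reason):
        // e.g. device not eligible, Apple Intelligence disabled,
        // or the model still downloading.
        _ = reason
        return product
    }
}
```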