Install: `npx claudepluginhub charleswiltgen/axiom --plugin axiom`
**You MUST use this skill for ANY Apple Intelligence or Foundation Models work.**
Use this router when a developer asks about adding AI features to an iOS app. First, determine which kind of AI the developer needs:
| Developer Intent | Route To |
|---|---|
| On-device text generation (Apple Intelligence) | Stay here → Foundation Models skills |
| Custom ML model deployment (PyTorch, TensorFlow) | Route to ios-ml → CoreML conversion, compression |
| Computer vision (image analysis, OCR, segmentation) | Route to ios-vision → Vision framework |
| Cloud API integration (OpenAI, etc.) | Route to ios-networking → URLSession patterns |
| System AI features (Writing Tools, Genmoji) | No custom code needed — these are system-provided |
**Key boundary: ios-ai vs ios-ml**
Foundation Models + concurrency (session blocking main thread, UI freezes): check for a missing `await` or generation work running on `@MainActor`.
Foundation Models + data (`@Generable` decoding errors, structured output issues):
Implementation patterns → /skill axiom-foundation-models
API reference → /skill axiom-foundation-models-ref
Diagnostics → /skill axiom-foundation-models-diag
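The concurrency case above can be sketched in Swift. This is a hedged, minimal sketch, not the skill's canonical pattern: `SummaryViewModel` is a hypothetical name, and the `LanguageModelSession` / `respond(to:)` / `.content` calls reflect the Foundation Models API as introduced at WWDC25 and should be checked against the current SDK.

```swift
import FoundationModels

// Hypothetical view model: generation is awaited, not blocked on.
@MainActor
final class SummaryViewModel {
    private let session = LanguageModelSession()
    var summary: String = ""

    func summarize(_ text: String) async {
        do {
            // `respond(to:)` is async; awaiting it suspends this task
            // instead of blocking the main thread, so the UI stays live.
            let response = try await session.respond(to: "Summarize: \(text)")
            summary = response.content
        } catch {
            // Guardrail violations and context-limit errors are thrown;
            // always keep a fallback path.
            summary = "Summary unavailable."
        }
    }
}
```

If the UI still freezes with code shaped like this, the problem is usually synchronous work around the call, which is where the ios-concurrency skill takes over.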
| Thought | Reality |
|---|---|
| "Foundation Models is just LanguageModelSession" | Foundation Models has @Generable, Tool protocol, streaming, and guardrails. foundation-models covers all. |
| "I'll figure out the AI patterns as I go" | AI APIs have specific error handling and fallback requirements. foundation-models prevents runtime failures. |
| "I've used LLMs before, this is similar" | Apple's on-device models have unique constraints (guardrails, context limits). foundation-models is Apple-specific. |
foundation-models:
foundation-models-diag:
User: "How do I use Apple Intelligence to generate structured data?"
→ Invoke: /skill axiom-foundation-models
User: "My AI generation is being blocked"
→ Invoke: /skill axiom-foundation-models-diag
User: "Show me @Generable examples"
→ Invoke: /skill axiom-foundation-models-ref
User: "Implement streaming AI generation"
→ Invoke: /skill axiom-foundation-models
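For the streaming request above, a rough Swift sketch follows. Assumptions to verify against the SDK: `streamResponse(to:)` exists on `LanguageModelSession`, and iterating the stream yields cumulative partial strings (in some SDK revisions the element is a snapshot type instead).

```swift
import FoundationModels

// Hedged sketch: surface partial output as it arrives rather than
// waiting for the complete response.
func streamStory(into update: @escaping (String) -> Void) async throws {
    let session = LanguageModelSession()
    let stream = session.streamResponse(to: "Tell a two-sentence story")
    // Assumption: each element is the cumulative text generated so far.
    for try await partial in stream {
        update("\(partial)")
    }
}
```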
User: "I want to add AI to my app" → First ask: Apple Intelligence (Foundation Models) or custom ML model? Route accordingly.
User: "My Foundation Models session is blocking the UI"
→ Invoke: /skill axiom-foundation-models (async patterns) + also invoke ios-concurrency if needed
User: "I want to run my PyTorch model on device"
→ Route to: ios-ml router (CoreML conversion, not Foundation Models)