`npx claudepluginhub charleswiltgen/axiom --plugin axiom`

This skill uses the workspace's default tool permissions.
**You MUST use this skill for ANY on-device machine learning or speech-to-text work.**
Use this router when the request involves converting, compressing, or deploying ML models on Apple platforms, or adding speech-to-text.
ios-ml vs ios-ai — know the difference:
| Developer Intent | Router |
|---|---|
| "Use Apple Intelligence / Foundation Models" | ios-ai — Apple's on-device LLM |
| "Run my own ML model on device" | ios-ml — CoreML conversion + deployment |
| "Add text generation with @Generable" | ios-ai — Foundation Models structured output |
| "Deploy a custom LLM with KV-cache" | ios-ml — Custom model optimization |
| "Use Vision framework for image analysis" | ios-vision — Not ML deployment |
| "Use pre-trained Apple NLP models" | ios-ai — Apple's models, not custom |
Rule of thumb: If the developer is converting/compressing/deploying their own model → ios-ml. If they're using Apple's built-in AI → ios-ai. If they're doing computer vision → ios-vision.
CoreML implementation patterns → /skill coreml
CoreML API reference → /skill coreml-ref
CoreML diagnostics → /skill coreml-diag
Speech-to-text implementation patterns → /skill speech
| Thought | Reality |
|---|---|
| "CoreML is just load and predict" | CoreML has compression, stateful models, compute unit selection, and async prediction. coreml covers all. |
| "My model is small, no optimization needed" | Even small models benefit from compute unit selection and async prediction. coreml has the patterns. |
| "I'll just use SFSpeechRecognizer" | iOS 26 has SpeechAnalyzer with better accuracy and offline support. speech skill covers the modern API. |
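The compute-unit and async-prediction points above can be sketched roughly as follows. This is an illustration, not the coreml skill's own code: `SegmentationModel` is a hypothetical Xcode-generated model class, and the async `load`/`prediction` variants depend on your deployment target, so check availability.

```swift
import CoreML

// Sketch only: "SegmentationModel" stands in for your Xcode-generated model class.
func runInference() async throws {
    // Compute unit selection: prefer the Neural Engine, with CPU fallback.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine

    // Async load keeps first-launch model compilation off the main thread.
    let model = try await SegmentationModel.load(configuration: config)

    // Async prediction avoids blocking the caller while inference runs.
    let input = SegmentationModelInput()  // populate with your features
    let output = try await model.prediction(input: input)
    _ = output
}
```

Even this minimal configuration step is the kind of detail the misconceptions above gloss over; the coreml skill covers the full set of patterns.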
coreml:
User: "How do I convert a PyTorch model to CoreML?"
→ Invoke: /skill coreml
User: "Compress my model to fit on iPhone"
→ Invoke: /skill coreml
User: "Implement KV-cache for my language model"
→ Invoke: /skill coreml

coreml-diag:
User: "Model loads slowly on first launch"
→ Invoke: /skill coreml-diag
User: "My compressed model has bad accuracy"
→ Invoke: /skill coreml-diag

speech:
User: "Add live transcription to my app"
→ Invoke: /skill speech
User: "Transcribe audio files with SpeechAnalyzer"
→ Invoke: /skill speech

coreml-ref:
User: "What's MLTensor and how do I use it?"
→ Invoke: /skill coreml-ref
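As a taste of what coreml-ref covers, MLTensor (CoreML's Swift-native tensor type, iOS 18+) looks roughly like this. The initializer and method names here are a sketch from memory, so verify exact signatures against the API reference before relying on them:

```swift
import CoreML

// Sketch of MLTensor usage (iOS 18+); confirm signatures via /skill coreml-ref.
func tensorDemo() async {
    // Build two 2x2 tensors from flat scalar arrays.
    let a = MLTensor(shape: [2, 2], scalars: [1, 2, 3, 4], scalarType: Float.self)
    let b = MLTensor(shape: [2, 2], scalars: [5, 6, 7, 8], scalarType: Float.self)

    // Tensor ops are dispatched to CoreML's compute devices.
    let product = a.matmul(b)

    // Materializing results is async because the work may run on GPU/ANE.
    let values = await product.shapedArray(of: Float.self)
    print(values)
}
```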