Use when deploying ANY machine learning model on-device, converting models to CoreML, compressing models, or implementing speech-to-text. Covers CoreML conversion, MLTensor, model compression (quantization/palettization/pruning), stateful models, KV-cache, multi-function models, async prediction, SpeechAnalyzer, SpeechTranscriber.
Deploys custom machine learning models on iOS devices via CoreML conversion, compression, and speech transcription.
Install: `npx claudepluginhub charleswiltgen/axiom`
This skill inherits all available tools. When active, it can use any tool Claude has access to.
You MUST use this skill for ANY on-device machine learning or speech-to-text work.
Use this router when deploying a machine learning model on-device, converting models to CoreML, compressing models, or implementing speech-to-text.
ios-ml vs ios-ai — know the difference:
| Developer Intent | Router |
|---|---|
| "Use Apple Intelligence / Foundation Models" | ios-ai — Apple's on-device LLM |
| "Run my own ML model on device" | ios-ml — CoreML conversion + deployment |
| "Add text generation with @Generable" | ios-ai — Foundation Models structured output |
| "Deploy a custom LLM with KV-cache" | ios-ml — Custom model optimization |
| "Use Vision framework for image analysis" | ios-vision — Not ML deployment |
| "Use pre-trained Apple NLP models" | ios-ai — Apple's models, not custom |
Rule of thumb: If the developer is converting/compressing/deploying their own model → ios-ml. If they're using Apple's built-in AI → ios-ai. If they're doing computer vision → ios-vision.
CoreML implementation patterns → /skill coreml
CoreML API reference → /skill coreml-ref
CoreML diagnostics → /skill coreml-diag
Speech implementation patterns → /skill speech
| Thought | Reality |
|---|---|
| "CoreML is just load and predict" | CoreML has compression, stateful models, compute unit selection, and async prediction. coreml covers all. |
| "My model is small, no optimization needed" | Even small models benefit from compute unit selection and async prediction. coreml has the patterns. |
| "I'll just use SFSpeechRecognizer" | iOS 26 has SpeechAnalyzer with better accuracy and offline support. speech skill covers the modern API. |
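To illustrate the "even small models benefit" row above, here is a minimal sketch of compute unit selection plus async prediction. The model name `Classifier` and the input feature provider are placeholders for your own assets; async model loading and async prediction are the modern CoreML APIs (async `prediction(from:)` requires a recent OS).

```swift
import CoreML

// Sketch: load a bundled compiled model with explicit compute unit
// selection, then run an async prediction off the main actor.
// "Classifier" is a hypothetical model name for illustration.
func loadAndPredict(input: MLFeatureProvider) async throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    // Let CoreML schedule work across CPU, GPU, and Neural Engine.
    config.computeUnits = .all

    guard let url = Bundle.main.url(forResource: "Classifier",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }

    // Async load avoids blocking the UI on first launch.
    let model = try await MLModel.load(contentsOf: url, configuration: config)

    // Async prediction keeps the calling actor responsive.
    return try await model.prediction(from: input)
}
```

Even a model with millisecond inference times gains from this shape: loading happens off the main actor, and `.all` lets the runtime pick the fastest available hardware.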
coreml: conversion, compression, stateful models, KV-cache, and async prediction patterns.
coreml-ref: API reference, including MLTensor.
coreml-diag: diagnostics for slow loading, degraded accuracy, and performance issues.
speech: SpeechAnalyzer and SpeechTranscriber transcription patterns.
User: "How do I convert a PyTorch model to CoreML?"
→ Invoke: /skill coreml
User: "Compress my model to fit on iPhone"
→ Invoke: /skill coreml
User: "Implement KV-cache for my language model"
→ Invoke: /skill coreml
User: "Model loads slowly on first launch"
→ Invoke: /skill coreml-diag
User: "My compressed model has bad accuracy"
→ Invoke: /skill coreml-diag
User: "Add live transcription to my app"
→ Invoke: /skill speech
User: "Transcribe audio files with SpeechAnalyzer"
→ Invoke: /skill speech
User: "What's MLTensor and how do I use it?"
→ Invoke: /skill coreml-ref