Core ML, Create ML, Vision framework, Natural Language framework, on-device ML integration. Use when user wants image classification, text analysis, object detection, sound classification, model optimization, or custom model integration. Covers Core ML vs Foundation Models decision.
```
npx claudepluginhub autisticaf/autisticaf-claude-code-marketplace --plugin apple-dev
```

This skill uses the workspace's default tool permissions.
> **First step:** Tell the user: "core-ml skill loaded."
Combined advisory, generator, and workflow skill for integrating machine learning into Apple platform apps. Covers Core ML model integration, Vision framework image analysis, NaturalLanguage framework text processing, Create ML training, and on-device model optimization.
Use this skill when the user wants:
- Image classification or object detection
- Text analysis (sentiment, language identification, tokenization, named entities)
- Sound classification
- Model optimization (quantization, palettization, pruning)
- Custom Core ML model integration
Before generating code, determine which framework is appropriate.
- @Generable structured output from natural language: use the apple-intelligence-foundation-models/ skill for implementation
- Text recognition (OCR): use Vision's built-in VNRecognizeTextRequest

Search for existing ML integration:
- Glob: `**/*Model*.swift`, `**/*Classifier*.swift`, `**/*Predictor*.swift`, `**/*.mlmodel`, `**/*.mlmodelc`, `**/*.mlpackage`
- Grep: `import CoreML`, `import Vision`, `import NaturalLanguage`
If found, ask the user how the existing integration should be handled before generating anything new.
Ask user via AskUserQuestion:
- What ML capability do you need?
- Do you have a trained model, or need to train one?
- What are the performance requirements?
Drag the .mlmodel or .mlpackage into the Xcode project navigator. Xcode compiles it to .mlmodelc at build time (optimized for the device).

```swift
import CoreML

// Option 1: Auto-generated class (simplest)
let model = try MyImageClassifier(configuration: MLModelConfiguration())

// Option 2: Generic MLModel loading (flexible)
let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")!
let config = MLModelConfiguration()
config.computeUnits = .all // CPU + GPU + Neural Engine
let model = try MLModel(contentsOf: url, configuration: config)

// Option 3: Async loading (recommended for large models)
let model = try await MLModel.load(contentsOf: url, configuration: config)
```

```swift
// Type-safe prediction with the auto-generated class
let input = MyImageClassifierInput(image: pixelBuffer)
let output = try model.prediction(input: input)
print(output.classLabel)      // "cat"
print(output.classLabelProbs) // ["cat": 0.95, "dog": 0.04, ...]

// Batch predictions
let batch = MLArrayBatchProvider(array: inputs)
let results = try model.predictions(from: batch)
```
| Capability | Request Class | Custom Model Needed? |
|---|---|---|
| Image classification | VNClassifyImageRequest | No (built-in) |
| Object detection | VNCoreMLRequest (wraps a custom Core ML model) | Yes |
| Face detection | VNDetectFaceRectanglesRequest | No |
| Face landmarks | VNDetectFaceLandmarksRequest | No |
| Text recognition (OCR) | VNRecognizeTextRequest | No |
| Body pose | VNDetectHumanBodyPoseRequest | No |
| Hand pose | VNDetectHumanHandPoseRequest | No |
| Barcode detection | VNDetectBarcodesRequest | No |
| Image saliency | VNGenerateAttentionBasedSaliencyImageRequest | No |
| Horizon detection | VNDetectHorizonRequest | No |
| Rectangle detection | VNDetectRectanglesRequest | No |
| Image similarity | VNGenerateImageFeaturePrintRequest | No |
```swift
import Vision

// Multiple requests on the same image
let textRequest = VNRecognizeTextRequest()
let faceRequest = VNDetectFaceRectanglesRequest()
let barcodeRequest = VNDetectBarcodesRequest()

let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([
    textRequest,    // OCR
    faceRequest,    // Face detection
    barcodeRequest  // Barcode scanning
])
// Each request's results are populated independently
```
Returns a score from -1.0 (negative) to +1.0 (positive):

```swift
import NaturalLanguage

let text = "This app is amazing!"
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (tag, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
// tag?.rawValue == "0.9" (positive)
```
```swift
let language = NLLanguageRecognizer.dominantLanguage(for: "Bonjour le monde")
// language == .french
```
```swift
let text = "Hello, world!"
let tokenizer = NLTokenizer(unit: .word)
tokenizer.string = text
tokenizer.enumerateTokens(in: text.startIndex..<text.endIndex) { range, _ in
    print(text[range]) // "Hello" then "world"
    return true
}
```
```swift
let text = "Tim Cook visited Apple Park in Cupertino."
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = text
tagger.enumerateTags(in: text.startIndex..<text.endIndex, unit: .word, scheme: .nameType) { tag, range in
    if let tag, tag != .other {
        print("\(text[range]): \(tag.rawValue)")
        // "Tim": PersonalName, "Cook": PersonalName
        // "Apple Park": OrganizationName, "Cupertino": PlaceName
    }
    return true
}
```
Reduces model size by lowering numerical precision:

```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

model = ct.models.MLModel("MyModel.mlmodel")

# Float16 quantization (safe default)
model_fp16 = quantization_utils.quantize_weights(model, nbits=16)
model_fp16.save("MyModel_fp16.mlmodel")

# Int8 quantization (aggressive, test accuracy)
model_int8 = quantization_utils.quantize_weights(model, nbits=8)
model_int8.save("MyModel_int8.mlmodel")
```
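For intuition, here is a minimal pure-Python sketch (not coremltools) of what linear weight quantization does to a tensor; `quantize_int8` is an illustrative name, not a real API:

```python
def quantize_int8(weights):
    """Map float weights onto 256 evenly spaced integer levels (illustrative)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # avoid a zero scale for constant tensors
    codes = [round((w - lo) / scale) for w in weights]   # ints in 0..255
    dequantized = [lo + c * scale for c in codes]        # what inference actually sees
    return codes, dequantized

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
codes, deq = quantize_int8(weights)
print(codes)
print(max(abs(w - d) for w, d in zip(weights, deq)))  # worst-case reconstruction error
```

Storing 8-bit codes plus a scale and offset is why int8 roughly quarters model size versus float32, at the cost of this reconstruction error.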
Reduces the number of unique weight values using k-means clustering:

```python
from coremltools.optimize.coreml import (
    OpPalettizerConfig, OptimizationConfig, palettize_weights
)

op_config = OpPalettizerConfig(nbits=4)  # 16 unique values per tensor
config = OptimizationConfig(global_config=op_config)
model_palettized = palettize_weights(model, config)
```
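To see concretely what palettization means, here is a toy pure-Python 1-D k-means (not the coremltools implementation) that collapses a weight list to at most 2**nbits unique values:

```python
def palettize(weights, nbits):
    """Cluster weights to 2**nbits unique values via naive 1-D k-means (illustrative)."""
    k = 2 ** nbits
    lo, hi = min(weights), max(weights)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(10):  # a few Lloyd iterations are enough for a toy example
        nearest = [min(range(k), key=lambda j: abs(w - centroids[j])) for w in weights]
        for j in range(k):
            members = [w for w, a in zip(weights, nearest) if a == j]
            if members:
                centroids[j] = sum(members) / len(members)
    # snap every weight to its nearest centroid
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]

w = [0.11, 0.09, 0.52, 0.48, -0.31, -0.29, 0.90, 0.88]
pw = palettize(w, nbits=2)   # at most 4 unique values survive
print(sorted(set(pw)))
```

Each weight is then stored as a small index into the lookup table of centroids (16 entries for nbits=4), which is where the size win comes from.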
Removes near-zero weights (sparse model). Note that the `coremltools.optimize.torch` APIs operate on a PyTorch module during training, not on a compiled MLModel:

```python
from coremltools.optimize.torch.pruning import (
    MagnitudePruner, MagnitudePrunerConfig, ModuleMagnitudePrunerConfig
)

module_config = ModuleMagnitudePrunerConfig(target_sparsity=0.75)  # remove 75% of weights
config = MagnitudePrunerConfig(global_config=module_config)
pruner = MagnitudePruner(model, config)  # model is a torch.nn.Module here
```
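The idea behind magnitude pruning fits in a few lines of plain Python (illustrative, not the coremltools API):

```python
def magnitude_prune(weights, target_sparsity):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    n_zero = int(len(weights) * target_sparsity)
    if n_zero == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_zero - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.02, -0.8, 0.01, 0.03, 0.7, -0.02]
pruned = magnitude_prune(w, 0.75)
print(pruned.count(0.0) / len(pruned))  # achieved sparsity
```

Ties at the threshold can push sparsity slightly above the target; real pruners also apply the mask gradually over training steps so accuracy can recover.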
```swift
let config = MLModelConfiguration()

// Best performance — let the system choose CPU, GPU, or Neural Engine
config.computeUnits = .all

// CPU only — predictable latency, no GPU/NE contention
config.computeUnits = .cpuOnly

// CPU + Neural Engine — good balance, avoids GPU contention with UI
config.computeUnits = .cpuAndNeuralEngine

// CPU + GPU — when Neural Engine unavailable
config.computeUnits = .cpuAndGPU
```
```swift
func classify(_ image: UIImage) async throws -> String {
    let model = try await MLModelManager.shared.model(named: "Classifier")
    // Prediction runs off the main thread via structured concurrency
    let input = try MLDictionaryFeatureProvider(dictionary: ["image": image.pixelBuffer!])
    let result = try await Task.detached {
        try model.prediction(from: input)
    }.value
    return result.featureValue(for: "classLabel")?.stringValue ?? "unknown"
}
```
```swift
// Process multiple images efficiently
let inputs = images.map { MyModelInput(image: $0.pixelBuffer!) }
let batch = MLArrayBatchProvider(array: inputs)
let results = try model.predictions(from: batch)

for i in 0..<results.count {
    let output = results.features(at: i)
    print(output.featureValue(for: "classLabel")?.stringValue ?? "")
}
```
```swift
// Compile .mlmodel to .mlmodelc at install time (not at every launch).
// This happens automatically when you add a .mlmodel to an Xcode target.
// For downloaded models, compile once and cache:
let compiledURL = try MLModel.compileModel(at: downloadedModelURL)
let permanentURL = appSupportDir.appendingPathComponent("MyModel.mlmodelc")
try FileManager.default.copyItem(at: compiledURL, to: permanentURL)
```
Based on the user's answers to the configuration questions, select the appropriate template(s) from templates.md.
| Capability | Files Generated |
|---|---|
| Any Core ML | MLModelManager.swift |
| Image classification | ImageClassifier.swift |
| Text analysis | TextAnalyzer.swift |
| Vision requests | VisionService.swift |
| Custom model | ModelConfig.swift + model-specific predictor |
| Camera + ML | CameraMLPipeline.swift |
Check the project structure to decide where generated files go:

- Sources/ exists -> Sources/ML/
- App/Services/ exists -> App/Services/ML/
- App/ exists -> App/ML/
- Otherwise -> ML/

After generation, provide:
```
ML/
├── MLModelManager.swift   # Central model lifecycle management
├── ImageClassifier.swift  # Vision-based image classification (if needed)
├── TextAnalyzer.swift     # NaturalLanguage wrapper (if needed)
├── ModelConfig.swift      # Compute unit configuration
└── VisionService.swift    # Vision request pipeline (if needed)
```
- A reminder to add the .mlmodel file to the Xcode project target (if using a custom model)
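The directory-placement rules can be sketched as a small helper (illustrative Python; the function name is hypothetical):

```python
def ml_output_dir(existing_dirs):
    """Return where generated ML files should go, following the priority order above."""
    if "Sources/" in existing_dirs:
        return "Sources/ML/"
    if "App/Services/" in existing_dirs:
        return "App/Services/ML/"
    if "App/" in existing_dirs:
        return "App/ML/"
    return "ML/"  # fallback: top-level ML/ directory

print(ml_output_dir({"App/", "App/Services/"}))  # → App/Services/ML/
```

The checks are ordered from most to least specific, so a project with both App/ and App/Services/ gets the more specific App/Services/ML/.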