Use when implementing ANY computer vision feature - image analysis, object detection, pose detection, person segmentation, subject lifting, hand/body pose tracking.
Implements iOS computer vision features including pose detection, segmentation, OCR, barcode scanning, and Visual Intelligence integration.
npx claudepluginhub charleswiltgen/axiom

This skill inherits all available tools. When active, it can use any tool Claude has access to.
You MUST use this skill for ANY computer vision work using the Vision framework.
Use this router when:
Implementation patterns → /skill axiom-vision
API reference → /skill axiom-vision-ref
Visual Intelligence integration → /skill axiom-vision-ref (see Visual Intelligence Integration section)
IntentValueQuery and SemanticContentDescriptor → /skill axiom-vision-ref
Diagnostics → /skill axiom-vision-diag
| Thought | Reality |
|---|---|
| "Vision framework is just a request/handler pattern" | Vision has coordinate conversion, confidence thresholds, and performance gotchas. vision covers them. |
| "I'll handle text recognition without the skill" | VNRecognizeTextRequest has fast/accurate modes and language-specific settings. vision has the patterns. |
| "Subject segmentation is straightforward" | Instance masks have HDR compositing and hand-exclusion patterns. vision covers complex scenarios. |
| "Visual Intelligence is just the camera API" | Visual Intelligence is a system-level feature requiring IntentValueQuery and SemanticContentDescriptor. vision-ref has the integration section. |
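To make the "Reality" column above concrete, here is a minimal sketch of text recognition with the accurate mode and normalized-to-pixel coordinate conversion. This is illustrative only, not a substitute for the patterns in axiom-vision; the function name and the choice of `en-US` are assumptions.

```swift
import Vision

// Sketch: recognize text in a CGImage and convert each result's
// normalized bounding box into pixel coordinates. Assumes iOS 13+.
func recognizeText(in cgImage: CGImage) throws -> [(String, CGRect)] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate      // slower but more accurate than .fast
    request.usesLanguageCorrection = true
    request.recognitionLanguages = ["en-US"]  // language-specific settings matter

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    return (request.results ?? []).compactMap { observation in
        guard let candidate = observation.topCandidates(1).first else { return nil }
        // Vision returns normalized, lower-left-origin rectangles;
        // convert to pixel space before drawing overlays.
        let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                                cgImage.width, cgImage.height)
        return (candidate.string, rect)
    }
}
```

The coordinate conversion step is the gotcha the table warns about: `boundingBox` is normalized with a lower-left origin, so drawing it directly in UIKit's upper-left-origin space mirrors results vertically.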
vision:
User: "How do I detect hand pose in an image?"
→ Invoke: /skill axiom-vision
User: "Isolate a subject but exclude the user's hands"
→ Invoke: /skill axiom-vision
User: "How do I read text from an image?"
→ Invoke: /skill axiom-vision
User: "Scan QR codes with the camera"
→ Invoke: /skill axiom-vision
User: "How do I implement document scanning?"
→ Invoke: /skill axiom-vision
User: "Use DataScannerViewController for live text"
→ Invoke: /skill axiom-vision
vision-diag:
User: "Subject detection isn't working"
→ Invoke: /skill axiom-vision-diag
User: "Text recognition returns wrong characters"
→ Invoke: /skill axiom-vision-diag
User: "Barcode not being detected"
→ Invoke: /skill axiom-vision-diag
vision-ref:
User: "Show me VNDetectHumanBodyPoseRequest examples"
→ Invoke: /skill axiom-vision-ref
User: "What symbologies does VNDetectBarcodesRequest support?"
→ Invoke: /skill axiom-vision-ref
User: "RecognizeDocumentsRequest API reference"
→ Invoke: /skill axiom-vision-ref
User: "How do I make my app work with Visual Intelligence?"
→ Invoke: /skill axiom-vision-ref
User: "How do users discover my app content through the camera?"
→ Invoke: /skill axiom-vision-ref
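The first example above ("How do I detect hand pose in an image?") maps onto VNDetectHumanBodyPoseRequest's hand counterpart, VNDetectHumanHandPoseRequest. A minimal sketch, assuming iOS 14+; the function name and the 0.3 confidence threshold are illustrative assumptions:

```swift
import Vision

// Sketch: detect up to two hands in a CGImage and collect index-fingertip
// positions whose confidence clears a threshold.
func indexFingertips(in cgImage: CGImage) throws -> [CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    return (request.results ?? []).compactMap { observation in
        guard let tip = try? observation.recognizedPoint(.indexTip),
              tip.confidence > 0.3 else { return nil }  // threshold is illustrative
        // recognizedPoint returns normalized coordinates (lower-left origin).
        return tip.location
    }
}
```

Low-confidence joints are filtered out because Vision reports occluded or out-of-frame joints with near-zero confidence rather than omitting them.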