```
npx claudepluginhub charleswiltgen/axiom --plugin axiom
```

This skill uses the workspace's default tool permissions.
**You MUST use this skill for ANY computer vision work using the Vision framework.**
Use this router when:

- Implementation patterns → `/skill axiom-vision`
- API reference → `/skill axiom-vision-ref`
- Visual Intelligence integration → `/skill axiom-vision-ref` (see the Visual Intelligence Integration section)
- IntentValueQuery and SemanticContentDescriptor → `/skill axiom-vision-ref`
- Diagnostics → `/skill axiom-vision-diag`
| Thought | Reality |
|---|---|
| "The Vision framework is just a request/handler pattern" | Vision has coordinate conversion, confidence thresholds, and performance gotchas. `axiom-vision` covers them. |
| "I'll handle text recognition without the skill" | VNRecognizeTextRequest has fast/accurate modes and language-specific settings. `axiom-vision` has the patterns. |
| "Subject segmentation is straightforward" | Instance masks involve HDR compositing and hand-exclusion patterns. `axiom-vision` covers the complex scenarios. |
| "Visual Intelligence is just the camera API" | Visual Intelligence is a system-level feature requiring IntentValueQuery and SemanticContentDescriptor. `axiom-vision-ref` has the integration section. |
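To see why the text-recognition row matters, here is a minimal hedged sketch of VNRecognizeTextRequest with the fast/accurate mode and language settings the table mentions. It assumes a `CGImage` input; the full production patterns (coordinate conversion, confidence thresholds) live in `axiom-vision`.

```swift
import Vision

// Sketch: recognize text in a CGImage with VNRecognizeTextRequest.
// Assumes the caller already has a CGImage; error handling is minimal.
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate      // .fast trades accuracy for speed
    request.recognitionLanguages = ["en-US"]  // language-specific settings
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Each observation may carry multiple candidates; take the top one.
    return (request.results ?? []).compactMap { observation in
        observation.topCandidates(1).first?.string
    }
}
```

Even this small sketch shows the decisions the skill documents: which recognition level to pick, which languages to declare, and whether language correction helps or hurts for your content.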
User: "How do I detect hand pose in an image?"
→ Invoke: /skill axiom-vision
User: "Isolate a subject but exclude the user's hands"
→ Invoke: /skill axiom-vision
User: "How do I read text from an image?"
→ Invoke: /skill axiom-vision
User: "Scan QR codes with the camera"
→ Invoke: /skill axiom-vision
User: "How do I implement document scanning?"
→ Invoke: /skill axiom-vision
User: "Use DataScannerViewController for live text"
→ Invoke: /skill axiom-vision
User: "Subject detection isn't working"
→ Invoke: /skill axiom-vision-diag
User: "Text recognition returns wrong characters"
→ Invoke: /skill axiom-vision-diag
User: "Barcode not being detected"
→ Invoke: /skill axiom-vision-diag
User: "Show me VNDetectHumanBodyPoseRequest examples"
→ Invoke: /skill axiom-vision-ref
User: "What symbologies does VNDetectBarcodesRequest support?"
→ Invoke: /skill axiom-vision-ref
User: "RecognizeDocumentsRequest API reference"
→ Invoke: /skill axiom-vision-ref
User: "How do I make my app work with Visual Intelligence?"
→ Invoke: /skill axiom-vision-ref
User: "How do users discover my app content through the camera?"
→ Invoke: /skill axiom-vision-ref