From rshankras-claude-code-apple-skills
Implements on-device AI for iOS apps using Apple Intelligence: Foundation Models for LLMs, Visual Intelligence for camera search, App Intents for Siri/Shortcuts/Spotlight.
npx claudepluginhub joshuarweaver/cascade-code-languages-misc-1 --plugin rshankras-claude-code-apple-skills
Skills for implementing Apple Intelligence features including on-device LLMs, visual recognition, App Intents integration, and intelligent assistants.
Use this skill when the user wants to:
- Integrate an on-device LLM with prompt-engineering best practices (Foundation Models).
- Integrate with iOS Visual Intelligence for camera-based search.
- Add App Intents for Siri, Shortcuts, Spotlight, and Apple Intelligence.

Minimal sketches of each integration follow this list.
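
For the on-device LLM piece, here is a minimal sketch assuming the iOS 26 FoundationModels API (SystemLanguageModel, LanguageModelSession); the instructions text and the summarize helper are illustrative, so verify names against the current SDK:

```swift
import FoundationModels

enum SummarizerError: Error { case modelUnavailable }

// Minimal sketch assuming the iOS 26 FoundationModels API;
// the function name and instructions text are illustrative.
func summarize(_ text: String) async throws -> String {
    // The on-device model only exists on Apple Intelligence hardware
    // with the feature enabled, so check availability first.
    guard case .available = SystemLanguageModel.default.availability else {
        throw SummarizerError.modelUnavailable
    }

    // Keep the model's role in instructions, separate from user
    // content (a common prompt-engineering practice).
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```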
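Visual Intelligence reaches into apps through App Intents: as presented at WWDC25, the integration point is an IntentValueQuery that receives a SemanticContentDescriptor for what the camera sees. The sketch below assumes that shape; LandmarkEntity and searchCatalog are hypothetical, and the protocol and property names should be verified against the shipping VisualIntelligence framework:

```swift
import AppIntents
import VisualIntelligence

// Hypothetical entity shown in the Visual Intelligence results overlay.
struct LandmarkEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Landmark"
    static var defaultQuery = LandmarkEntityQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct LandmarkEntityQuery: EntityQuery {
    func entities(for identifiers: [LandmarkEntity.ID]) async throws -> [LandmarkEntity] {
        []  // A real app would resolve saved identifiers from its store.
    }
}

// Assumed integration point: Visual Intelligence passes a
// SemanticContentDescriptor describing the captured frame, and the
// query returns matching entities from the app's own search.
struct LandmarkSearchQuery: IntentValueQuery {
    func values(for input: SemanticContentDescriptor) async throws -> [LandmarkEntity] {
        // labels is assumed to expose the classifier's tags for the scene.
        try await searchCatalog(labels: input.labels)
    }
}

// Hypothetical stand-in for the app's real visual-search pipeline.
func searchCatalog(labels: [String]) async throws -> [LandmarkEntity] {
    []
}
```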
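For the App Intents surface, a sketch of a basic intent plus an App Shortcut, which is what makes the intent discoverable by Siri, Shortcuts, and Spotlight with no user setup (the workout intent itself is a made-up example):

```swift
import AppIntents

// Illustrative intent; the name, parameter, and dialog are placeholders.
struct StartWorkoutIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workout"

    @Parameter(title: "Workout Name")
    var workoutName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Trigger the app's real workout logic here.
        .result(dialog: "Starting \(workoutName).")
    }
}

// An AppShortcutsProvider surfaces the intent to Siri, Shortcuts,
// and Spotlight as soon as the app is installed.
struct WorkoutShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkoutIntent(),
            phrases: ["Start a workout in \(.applicationName)"],
            shortTitle: "Start Workout",
            systemImageName: "figure.run"
        )
    }
}
```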
Reference docs:
- /Users/ravishankar/Downloads/docs/FoundationModels-Using-on-device-LLM-in-your-app.md
- /Users/ravishankar/Downloads/docs/Implementing-Visual-Intelligence-in-iOS.md
- /Users/ravishankar/Downloads/docs/AppIntents-Updates.md