Build AR experiences on iOS using RealityKit for rendering and ARKit for world
tracking. Covers RealityView, entity management, raycasting, scene
understanding, and gesture-based interactions. Targets Swift 6.3 / iOS 26+.
Requirements:
- `NSCameraUsageDescription` added to Info.plist
- `RealityViewCameraContent` (iOS 18+, macOS 15+)
- AR features require devices with an A9 chip or later. Always verify support before presenting AR UI.
```swift
import ARKit

guard ARWorldTrackingConfiguration.isSupported else {
    showUnsupportedDeviceMessage()
    return
}
```
| Type | Platform | Role |
|---|---|---|
| `RealityView` | iOS 18+, visionOS 1+ | SwiftUI view that hosts RealityKit content |
| `RealityViewCameraContent` | iOS 18+, macOS 15+ | Content displayed through the device camera |
| `Entity` | All | Base class for all scene objects |
| `ModelEntity` | All | Entity with a visible 3D model |
| `AnchorEntity` | All | Tethers entities to a real-world anchor |
RealityView is the SwiftUI entry point for RealityKit. On iOS, it provides
RealityViewCameraContent which renders through the device camera for AR.
```swift
import SwiftUI
import RealityKit

struct ARExperienceView: View {
    var body: some View {
        RealityView { content in
            // content is RealityViewCameraContent on iOS
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .blue, isMetallic: true)]
            )
            sphere.position = [0, 0, -0.5] // 50 cm in front of the camera
            content.add(sphere)
        }
    }
}
```
Use the update closure to respond to SwiftUI state changes:
```swift
struct PlacementView: View {
    @State private var modelColor: UIColor = .red

    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.1),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            box.name = "colorBox"
            box.position = [0, 0, -0.5]
            content.add(box)
        } update: { content in
            if let box = content.entities.first(
                where: { $0.name == "colorBox" }
            ) as? ModelEntity {
                box.model?.materials = [SimpleMaterial(
                    color: modelColor,
                    isMetallic: false
                )]
            }
        }
        Button("Change Color") {
            modelColor = modelColor == .red ? .green : .red
        }
    }
}
```
Load 3D models asynchronously to avoid blocking the main thread:
```swift
RealityView { content in
    if let robot = try? await ModelEntity(named: "robot") {
        robot.position = [0, -0.2, -0.8]
        robot.scale = [0.01, 0.01, 0.01]
        content.add(robot)
    }
}
```
```swift
// Box
let box = ModelEntity(
    mesh: .generateBox(size: [0.1, 0.2, 0.1], cornerRadius: 0.005),
    materials: [SimpleMaterial(color: .gray, isMetallic: true)]
)

// Sphere
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 0.05),
    materials: [SimpleMaterial(color: .blue, roughness: 0.2, isMetallic: true)]
)

// Plane
let plane = ModelEntity(
    mesh: .generatePlane(width: 0.3, depth: 0.3),
    materials: [SimpleMaterial(color: .green, isMetallic: false)]
)
```
Entities use an ECS (Entity Component System) architecture. Add components to give entities behavior:
```swift
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)

// Make it respond to physics
box.components.set(PhysicsBodyComponent(
    massProperties: .default,
    material: .default,
    mode: .dynamic
))

// Add collision shape for interaction
box.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
))

// Enable input targeting for gestures
box.components.set(InputTargetComponent())
```
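ECS also lets you define your own components and systems for per-frame behavior. A minimal sketch, assuming illustrative names (`SpinComponent` and `SpinSystem` are not built-in RealityKit types):

```swift
import RealityKit

// Data-only component: holds state, no behavior.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi
}

// System: queries all entities carrying SpinComponent and rotates them each frame.
final class SpinSystem: System {
    private static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(
            matching: Self.query, updatingSystemWhen: .rendering
        ) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.radiansPerSecond * Float(context.deltaTime)
            entity.transform.rotation =
                simd_quatf(angle: angle, axis: [0, 1, 0]) * entity.transform.rotation
        }
    }
}

// Register once (e.g. at app launch), then attach the component to any entity:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
// box.components.set(SpinComponent(radiansPerSecond: .pi / 2))
```

Keeping data in components and logic in systems is what lets RealityKit batch per-frame work efficiently instead of scattering it across SwiftUI update closures.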
Use AnchorEntity to anchor content to detected surfaces or world positions:
```swift
RealityView { content in
    // Anchor to a horizontal surface
    let floorAnchor = AnchorEntity(.plane(
        .horizontal,
        classification: .floor,
        minimumBounds: [0.2, 0.2]
    ))
    let model = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .orange, isMetallic: false)]
    )
    floorAnchor.addChild(model)
    content.add(floorAnchor)
}
```
| Target | Description |
|---|---|
| `.plane(.horizontal, ...)` | Horizontal surfaces (floors, tables) |
| `.plane(.vertical, ...)` | Vertical surfaces (walls) |
| `.plane(.any, ...)` | Any detected plane |
| `.world(transform:)` | Fixed world-space position |
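For content pinned to a fixed point in space rather than a detected surface, `.world(transform:)` takes a world-space transform. A sketch, assuming a placement 1 m in front of the session origin:

```swift
import RealityKit

// Build a transform translated 1 m along -z (in front of the session origin).
var transform = matrix_identity_float4x4
transform.columns.3 = [0, 0, -1, 1]

let worldAnchor = AnchorEntity(.world(transform: transform))
let marker = ModelEntity(
    mesh: .generateSphere(radius: 0.03),
    materials: [SimpleMaterial(color: .purple, isMetallic: false)]
)
worldAnchor.addChild(marker)
// Inside a RealityView make closure: content.add(worldAnchor)
```

World anchors stay put as the user moves; plane anchors re-attach if the detected surface is refined.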
Use RealityViewCameraContent to convert between SwiftUI view coordinates
and RealityKit world space. Pair with SpatialTapGesture to place objects
where the user taps on a detected surface.
```swift
struct DraggableARView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: true)]
            )
            box.position = [0, 0, -0.5]
            box.components.set(CollisionComponent(
                shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
            ))
            box.components.set(InputTargetComponent())
            box.name = "draggable"
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    guard let parent = entity.parent else { return }
                    entity.position = value.convert(
                        value.location3D,
                        from: .local,
                        to: parent
                    )
                }
        )
    }
}
```
SpatialTapGesture follows the same pattern for tap selection:

```swift
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            let tappedEntity = value.entity
            highlightEntity(tappedEntity) // your own highlight helper
        }
)
```
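The tap-to-place flow described earlier can be sketched by anchoring an invisible collision "catcher" to a detected surface and converting the tap location into its space. A sketch under stated assumptions — the catcher size and entity names are illustrative, not a prescribed API pattern:

```swift
import SwiftUI
import RealityKit

struct TapToPlaceView: View {
    var body: some View {
        RealityView { content in
            // Invisible plane anchored to a horizontal surface; it only
            // exists to receive taps, so it has no visible model.
            let surfaceAnchor = AnchorEntity(.plane(
                .horizontal, classification: .any, minimumBounds: [0.2, 0.2]
            ))
            let catcher = Entity()
            catcher.components.set(CollisionComponent(
                shapes: [.generateBox(size: [1, 0.001, 1])]
            ))
            catcher.components.set(InputTargetComponent())
            surfaceAnchor.addChild(catcher)
            content.add(surfaceAnchor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap into the tapped entity's space
                    // and drop a small sphere there.
                    let position = value.convert(
                        value.location3D, from: .local, to: value.entity
                    )
                    let sphere = ModelEntity(
                        mesh: .generateSphere(radius: 0.02),
                        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
                    )
                    sphere.position = position
                    value.entity.addChild(sphere)
                }
        )
    }
}
```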
Subscribe to scene update events for continuous processing:
```swift
RealityView { content in
    let entity = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .yellow, isMetallic: false)]
    )
    entity.position = [0, 0, -0.5]
    content.add(entity)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        let time = Float(event.deltaTime)
        entity.position.y += sin(Float(Date().timeIntervalSince1970)) * time * 0.1
    }
}
```
On visionOS, ARKit provides a different API surface with ARKitSession,
WorldTrackingProvider, and PlaneDetectionProvider. These visionOS-specific
types are not available on iOS. On iOS, RealityKit handles world tracking
automatically through RealityViewCameraContent.
Not all devices support AR. Showing a black camera view with no feedback confuses users.
```swift
// WRONG -- no device check
struct MyARView: View {
    var body: some View {
        RealityView { content in
            // Fails silently on unsupported devices
        }
    }
}
```

```swift
// CORRECT -- check support and show fallback
struct MyARView: View {
    var body: some View {
        if ARWorldTrackingConfiguration.isSupported {
            RealityView { content in
                // AR content
            }
        } else {
            ContentUnavailableView(
                "AR Not Supported",
                systemImage: "arkit",
                description: Text("This device does not support AR.")
            )
        }
    }
}
```
Loading large USDZ files on the main thread causes frame drops and hangs.
The make closure of RealityView is async -- use it.
```swift
// WRONG -- synchronous load blocks the main thread
RealityView { content in
    let model = try! Entity.load(named: "large-scene")
    content.add(model)
}
```

```swift
// CORRECT -- async load
RealityView { content in
    if let model = try? await ModelEntity(named: "large-scene") {
        content.add(model)
    }
}
```
Gestures only work on entities that have both CollisionComponent and
InputTargetComponent. Without them, taps and drags pass through.
```swift
// WRONG -- entity ignores gestures
let box = ModelEntity(mesh: .generateBox(size: 0.1))
content.add(box)
```

```swift
// CORRECT -- add collision and input components
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)
box.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
))
box.components.set(InputTargetComponent())
content.add(box)
```
The update closure runs on every SwiftUI state change. Creating entities
there duplicates content on each render pass.
```swift
// WRONG -- duplicates entities on every state change
RealityView { content in
    // empty
} update: { content in
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
    content.add(sphere) // Added again on every update
}
```

```swift
// CORRECT -- create in make, modify in update
RealityView { content in
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
    sphere.name = "mySphere"
    content.add(sphere)
} update: { content in
    if let sphere = content.entities.first(
        where: { $0.name == "mySphere" }
    ) as? ModelEntity {
        // Modify existing entity
        sphere.position.y = newYPosition
    }
}
```
RealityKit on iOS needs camera access. If the user denies permission, the view shows a black screen with no explanation.
```swift
// WRONG -- no permission handling
RealityView { content in
    // Black screen if camera denied
}
```

```swift
// CORRECT -- check and request permission
import AVFoundation

struct ARContainerView: View {
    @State private var cameraAuthorized = false

    var body: some View {
        Group {
            if cameraAuthorized {
                RealityView { content in
                    // AR content
                }
            } else {
                ContentUnavailableView(
                    "Camera Access Required",
                    systemImage: "camera.fill",
                    description: Text("Enable camera in Settings to use AR.")
                )
            }
        }
        .task {
            let status = AVCaptureDevice.authorizationStatus(for: .video)
            if status == .authorized {
                cameraAuthorized = true
            } else if status == .notDetermined {
                cameraAuthorized = await AVCaptureDevice
                    .requestAccess(for: .video)
            }
        }
    }
}
```
Checklist:
- `NSCameraUsageDescription` set in Info.plist
- Models loaded asynchronously in the `make` closure
- Entities created in `make`, modified in `update` (not created in `update`)
- Interactive entities have both `CollisionComponent` and `InputTargetComponent`
- `SceneEvents.Update` subscriptions used for per-frame logic (not SwiftUI timers)
- `ModelEntity(named:)` async loading, not `Entity.load(named:)`
- State changes handled in the `update` closure
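A minimal view satisfying the checklist might look like the sketch below; the asset name "chair" and the collision box size are illustrative assumptions:

```swift
import SwiftUI
import RealityKit
import ARKit
import AVFoundation

struct CheckedARView: View {
    @State private var cameraAuthorized = false

    var body: some View {
        Group {
            if !ARWorldTrackingConfiguration.isSupported {
                // Device check before presenting any AR UI
                ContentUnavailableView("AR Not Supported", systemImage: "arkit")
            } else if cameraAuthorized {
                RealityView { content in
                    // Async load in make; entity gets both components
                    // required for gesture targeting.
                    if let model = try? await ModelEntity(named: "chair") {
                        model.position = [0, 0, -0.5]
                        model.components.set(CollisionComponent(
                            shapes: [.generateBox(size: [0.3, 0.3, 0.3])]
                        ))
                        model.components.set(InputTargetComponent())
                        content.add(model)
                    }
                }
            } else {
                ContentUnavailableView("Camera Access Required",
                                       systemImage: "camera.fill")
            }
        }
        .task {
            // Returns true immediately if already authorized.
            cameraAuthorized = await AVCaptureDevice.requestAccess(for: .video)
        }
    }
}
```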