Skill: vm-provisioning

Use when the user asks about creating, configuring, or cloning VMs on Unraid, including libvirt, virsh, cloud-init, GPU passthrough, or VFIO setup. Examples: "create vm", "provision vm", "gpu passthrough", "cloud-init unraid", "virsh commands".

From unraid-assistant. Install by running in your terminal:

```bash
npx claudepluginhub jamesprial/prial-plugins --plugin unraid-assistant
```

Tool Access: this skill uses the workspace's default tool permissions.

Supporting Assets: references/vm-xml-template.md
Skill Content

VM Provisioning on Unraid

Architecture

Unraid uses three layers for virtualization:

| Layer | Role |
|---|---|
| KVM | Kernel hypervisor module |
| QEMU | Hardware emulation (disk, network, display) |
| libvirt | Management daemon, XML domain definitions, virsh CLI |

CRITICAL: Unraid's GraphQL API cannot create VMs; it only supports lifecycle operations (start, stop, pause) on existing VMs. Use libvirt for all VM provisioning.

Storage Layout

| Path | Contents |
|---|---|
| `/mnt/user/system/libvirt/libvirt.img` | Loop-mounted image containing all VM XML definitions |
| `/mnt/user/domains/<VM_NAME>/vdisk1.img` | VM disk images (qcow2 or raw) |
| `/mnt/user/isos/` | ISO images for installation or cloud-init |

virsh Commands

| Command | Purpose |
|---|---|
| `virsh list --all` | List all VMs with state |
| `virsh define myvm.xml` | Register VM from XML file |
| `virsh undefine MyVM` | Remove VM definition (keeps disk) |
| `virsh start MyVM` | Start a defined VM |
| `virsh shutdown MyVM` | Graceful ACPI shutdown |
| `virsh destroy MyVM` | Force power off |
| `virsh dumpxml MyVM` | Export current XML definition |
| `virsh edit MyVM` | Edit XML in place |
| `virsh snapshot-create-as --domain MyVM --name snap1 --disk-only --atomic` | Create disk snapshot |
| `virsh vol-create-as domains disk.qcow2 80G --format qcow2` | Create storage volume |
| `virsh nodedev-detach pci_0000_01_00_0` | Detach PCI device for passthrough |
| `virsh nodedev-reattach pci_0000_01_00_0` | Reattach PCI device to host |
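The node-device names in the last two rows follow a mechanical convention: the PCI address with `pci_` prefixed and every separator turned into an underscore. A tiny helper (a sketch; the function name is ours) makes the mapping explicit:

```python
def pci_to_nodedev(pci_address: str) -> str:
    """Map a PCI address such as '0000:01:00.0' to the libvirt
    node-device name that virsh nodedev-detach expects."""
    return "pci_" + pci_address.replace(":", "_").replace(".", "_")

print(pci_to_nodedev("0000:01:00.0"))  # pci_0000_01_00_0
```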

Python libvirt API

```python
import libvirt

# Local connection (on the Unraid host)
conn = libvirt.open("qemu:///system")

# ...or a remote connection over SSH:
# conn = libvirt.open("qemu+ssh://root@192.168.1.100/system")

# Define and start a VM
with open("myvm.xml") as f:
    xml = f.read()
dom = conn.defineXML(xml)
dom.create()  # start the VM

# List all domains; state() returns a [state, reason] pair
for dom in conn.listAllDomains():
    print(dom.name(), dom.state())

conn.close()
```
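`dom.state()` returns a pair of integers, not a string. The lookup below is a convenience sketch based on libvirt's documented `virDomainState` enum; verify the values against the constants shipped with your libvirt-python version:

```python
# Numeric domain state codes per libvirt's virDomainState enum
# (assumed values; check libvirt.VIR_DOMAIN_* constants on your system).
DOMAIN_STATES = {
    0: "no state",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutting down",
    5: "shut off",
    6: "crashed",
    7: "suspended (PM)",
}

def describe_state(state_pair):
    """Translate the [state, reason] list returned by dom.state()."""
    code = state_pair[0]
    return DOMAIN_STATES.get(code, f"unknown ({code})")

print(describe_state([1, 0]))  # running
```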

Cloud-Init NoCloud Workflow

Provision headless Linux VMs without interactive installation:

1. Create VM Disk from Cloud Image

```bash
qemu-img create -f qcow2 \
  -b /mnt/user/isos/jammy-server-cloudimg-amd64.img \
  -F qcow2 /mnt/user/domains/MyVM/vdisk1.img 40G
```
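The overlay disk stores only the blocks that diverge from the cloud image, so creation is instant. As a sketch, the invocation above can be assembled programmatically (`overlay_command` is a hypothetical helper; the paths are the examples from this step):

```python
def overlay_command(base_image, overlay, size="40G"):
    """Assemble the qemu-img call that creates a qcow2 overlay
    backed by a cloud image, mirroring the command above."""
    return [
        "qemu-img", "create", "-f", "qcow2",
        "-b", base_image,   # backing (base) image
        "-F", "qcow2",      # backing-file format, required by modern qemu-img
        overlay, size,
    ]

cmd = overlay_command(
    "/mnt/user/isos/jammy-server-cloudimg-amd64.img",
    "/mnt/user/domains/MyVM/vdisk1.img",
)
```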

2. Create user-data

```yaml
#cloud-config
hostname: myvm
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@host
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

3. Create meta-data

```yaml
instance-id: myvm-001
local-hostname: myvm
```
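Steps 2 and 3 can be scripted. The sketch below (`write_seed` is a hypothetical helper) writes both NoCloud seed files into a directory ready for the ISO step:

```python
import tempfile
from pathlib import Path

def write_seed(dest, hostname, instance_id, user_data):
    """Write the user-data and meta-data files for a NoCloud seed.
    genisoimage later packs this directory into an ISO labelled cidata."""
    d = Path(dest)
    d.mkdir(parents=True, exist_ok=True)
    (d / "user-data").write_text(user_data)
    (d / "meta-data").write_text(
        f"instance-id: {instance_id}\nlocal-hostname: {hostname}\n"
    )
    return d

seed = write_seed(tempfile.mkdtemp(), "myvm", "myvm-001",
                  "#cloud-config\nhostname: myvm\n")
```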

4. Generate Cloud-Init ISO

```bash
genisoimage -output /mnt/user/isos/myvm-cloud-init.iso \
  -volid cidata -rational-rock -joliet \
  /tmp/user-data /tmp/meta-data
```

5. Define and Start VM

Attach the ISO as a CD-ROM in the VM XML (see the reference template), then:

```bash
virsh define /tmp/myvm.xml
virsh start MyVM
```

The VM boots, applies cloud-init configuration, and is accessible via SSH.

GPU Passthrough

VFIO Bindings

Edit /boot/config/vfio-pci.cfg to bind PCI devices at boot:

```
BIND=0000:01:00.0|10de:2484
BIND=0000:01:00.1|10de:228b
```

Format: BIND=<PCI_ADDRESS>|<VENDOR>:<DEVICE_ID>

Kernel Parameters

Edit /boot/syslinux/syslinux.cfg to add IOMMU and device stubbing:

```
append ... iommu=pt intel_iommu=on vfio-pci.ids=10de:2484,10de:228b
```
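The same vendor:device list feeds both vfio-pci.cfg and the syslinux append line, so the two can drift apart when edited by hand. A small generator (a sketch using the example IDs above; both output formats as described in this section) keeps them in sync:

```python
# Each entry: (PCI address, vendor:device ID). Values are the
# example GPU and its audio function from the snippets above.
DEVICES = [
    ("0000:01:00.0", "10de:2484"),  # GPU
    ("0000:01:00.1", "10de:228b"),  # GPU audio function
]

def vfio_cfg_lines(devices):
    """BIND= lines for /boot/config/vfio-pci.cfg."""
    return [f"BIND={addr}|{ids}" for addr, ids in devices]

def vfio_ids_param(devices):
    """vfio-pci.ids= value for the syslinux append line."""
    return "vfio-pci.ids=" + ",".join(ids for _, ids in devices)

print("\n".join(vfio_cfg_lines(DEVICES)))
print(vfio_ids_param(DEVICES))
```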

Runtime Binding

```bash
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1
```

XML hostdev Block

Add to the VM XML <devices> section:

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
</hostdev>
```
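The address attributes are just the PCI address split into its domain:bus:slot.function fields, each rendered with a 0x prefix. The sketch below (hypothetical helper, Python stdlib only) generates the block from a plain address string:

```python
import xml.etree.ElementTree as ET

def hostdev_xml(pci_address: str) -> str:
    """Build the <hostdev> passthrough block from a PCI address
    like '0000:01:00.0' (domain:bus:slot.function)."""
    dom_bus, slot_fn = pci_address.rsplit(":", 1)
    domain, bus = dom_bus.split(":")
    slot, function = slot_fn.split(".")
    hd = ET.Element("hostdev", mode="subsystem", type="pci", managed="yes")
    src = ET.SubElement(hd, "source")
    ET.SubElement(src, "address",
                  domain=f"0x{domain}", bus=f"0x{bus}",
                  slot=f"0x{slot}", function=f"0x{function}")
    return ET.tostring(hd, encoding="unicode")

print(hostdev_xml("0000:01:00.0"))
```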

IOMMU Group Enumeration

```bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n="${d##*/}"; g="${d%/devices/*}"; g="${g##*/}"
  echo "IOMMU Group $g: $(lspci -nns "$n")"
done
```
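The shell loop is pure path surgery: the group number and the device address are both substrings of the sysfs path. The same parsing in Python, as a sketch for use in scripts:

```python
def iommu_entry(path: str):
    """Extract (group, device) from a sysfs path such as
    /sys/kernel/iommu_groups/14/devices/0000:01:00.0 --
    the same fields the shell loop pulls out with ${d##*/}."""
    parts = path.strip("/").split("/")
    group = parts[parts.index("iommu_groups") + 1]
    device = parts[-1]
    return group, device

print(iommu_entry("/sys/kernel/iommu_groups/14/devices/0000:01:00.0"))
```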

VM Cloning

Full Clone

```bash
virt-clone --original=SourceVM --name=CloneVM --auto-clone
```

Creates independent copies of all disks. The clone can run simultaneously with the source.

Linked Clone (COW)

```bash
qemu-img create -f qcow2 \
  -b /mnt/user/domains/SourceVM/vdisk1.img \
  -F qcow2 /mnt/user/domains/CloneVM/vdisk1.img
```

Created instantly and shares the base image, storing only the differences. The backing file must remain accessible, or the clone becomes unusable.

XML Template Reference

See ${CLAUDE_PLUGIN_ROOT}/skills/vm-provisioning/references/vm-xml-template.md for a complete annotated libvirt domain XML template with UEFI boot, cloud-init, GPU passthrough, and USB passthrough examples.
