---
name: vm-provisioning
description: 'Use when the user asks about creating, configuring, or cloning VMs on Unraid, including libvirt, virsh, cloud-init, GPU passthrough, or VFIO setup. Examples: "create vm", "provision vm", "gpu passthrough", "cloud-init unraid", "virsh commands".'
---
# VM Provisioning on Unraid
## Architecture
Unraid uses three layers for virtualization:
| Layer | Role |
|---|---|
| KVM | Kernel hypervisor module |
| QEMU | Hardware emulation (disk, network, display) |
| libvirt | Management daemon, XML domain definitions, virsh CLI |
**CRITICAL:** The GraphQL API cannot create VMs. It only supports lifecycle operations (start, stop, pause) on existing VMs. Use libvirt for all VM provisioning.
## Storage Layout

| Path | Contents |
|---|---|
| `/mnt/user/system/libvirt/libvirt.img` | Loop-mounted image containing all VM XML definitions |
| `/mnt/user/domains/<VM_NAME>/vdisk1.img` | VM disk images (qcow2 or raw) |
| `/mnt/user/isos/` | ISO images for installation or cloud-init |
## virsh Commands

| Command | Purpose |
|---|---|
| `virsh list --all` | List all VMs with state |
| `virsh define myvm.xml` | Register a VM from an XML file |
| `virsh undefine MyVM` | Remove a VM definition (keeps disks) |
| `virsh start MyVM` | Start a defined VM |
| `virsh shutdown MyVM` | Graceful ACPI shutdown |
| `virsh destroy MyVM` | Force power off |
| `virsh dumpxml MyVM` | Export the current XML definition |
| `virsh edit MyVM` | Edit the XML in place |
| `virsh snapshot-create-as --domain MyVM --name snap1 --disk-only --atomic` | Create a disk-only snapshot |
| `virsh vol-create-as domains disk.qcow2 80G --format qcow2` | Create a storage volume in the `domains` pool |
| `virsh nodedev-detach pci_0000_01_00_0` | Detach a PCI device for passthrough |
| `virsh nodedev-reattach pci_0000_01_00_0` | Reattach a PCI device to the host |
## Python libvirt API

```python
import libvirt

# Local connection (on the Unraid host)
conn = libvirt.open("qemu:///system")

# Remote connection (over SSH) -- use instead of the local connection:
# conn = libvirt.open("qemu+ssh://root@192.168.1.100/system")

# Define and start a VM
with open("myvm.xml") as f:
    xml = f.read()
dom = conn.defineXML(xml)
dom.create()  # start the VM

# List all domains
for d in conn.listAllDomains():
    print(d.name(), d.state())

conn.close()
```
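In libvirt-python, `dom.state()` returns a `[state, reason]` pair of integers rather than a name. As a small illustrative helper (the mapping values come from libvirt's `virDomainState` enum, also exposed as constants like `libvirt.VIR_DOMAIN_RUNNING`; the `state_name` function itself is not part of the libvirt API), the codes can be turned into readable labels:

```python
# virDomainState codes -> labels (values match libvirt's enum)
DOMAIN_STATES = {
    0: "no state",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutting down",
    5: "shut off",
    6: "crashed",
    7: "suspended (PM)",
}

def state_name(code: int) -> str:
    """Translate a virDomainState code into a readable label."""
    return DOMAIN_STATES.get(code, f"unknown ({code})")

# In the listing loop above you could then write:
#   code, _reason = d.state()
#   print(d.name(), state_name(code))
print(state_name(1))  # running
print(state_name(5))  # shut off
```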
## Cloud-Init NoCloud Workflow
Provision headless Linux VMs without interactive installation:
### 1. Create the VM Disk from a Cloud Image

```shell
qemu-img create -f qcow2 \
  -b /mnt/user/isos/jammy-server-cloudimg-amd64.img \
  -F qcow2 /mnt/user/domains/MyVM/vdisk1.img 40G
```
### 2. Create user-data

```yaml
#cloud-config
hostname: myvm
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@host
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```
### 3. Create meta-data

```yaml
instance-id: myvm-001
local-hostname: myvm
```
### 4. Generate the Cloud-Init ISO

```shell
genisoimage -output /mnt/user/isos/myvm-cloud-init.iso \
  -volid cidata -rational-rock -joliet \
  /tmp/user-data /tmp/meta-data
```
### 5. Define and Start the VM

Attach the ISO as a CD-ROM in the VM XML (see the reference template), then:

```shell
virsh define /tmp/myvm.xml
virsh start MyVM
```
The VM boots, applies cloud-init configuration, and is accessible via SSH.
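The seed-generation steps above can be sketched as one small Python helper. This is illustrative, not an Unraid tool: `write_nocloud_seed` and the `seed.iso` output name are assumptions, and the returned `genisoimage` argv mirrors step 4 rather than being run here.

```python
import tempfile
from pathlib import Path

def write_nocloud_seed(workdir: str, hostname: str,
                       instance_id: str, ssh_key: str) -> list[str]:
    """Write user-data/meta-data and return a genisoimage argv for the seed ISO."""
    d = Path(workdir)
    d.mkdir(parents=True, exist_ok=True)
    # Step 2: minimal #cloud-config user-data
    (d / "user-data").write_text(
        "#cloud-config\n"
        f"hostname: {hostname}\n"
        "users:\n"
        "  - name: admin\n"
        "    sudo: ALL=(ALL) NOPASSWD:ALL\n"
        "    ssh_authorized_keys:\n"
        f"      - {ssh_key}\n"
        "packages:\n"
        "  - qemu-guest-agent\n"
    )
    # Step 3: meta-data
    (d / "meta-data").write_text(
        f"instance-id: {instance_id}\nlocal-hostname: {hostname}\n"
    )
    # Step 4: the command that would build the seed ISO (volid must be "cidata")
    return [
        "genisoimage", "-output", str(d / "seed.iso"),
        "-volid", "cidata", "-rational-rock", "-joliet",
        str(d / "user-data"), str(d / "meta-data"),
    ]

workdir = tempfile.mkdtemp()
argv = write_nocloud_seed(workdir, "myvm", "myvm-001",
                          "ssh-ed25519 AAAA... user@host")
# Build the ISO with: subprocess.run(argv, check=True)
print(argv)
```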
## GPU Passthrough

### VFIO Bindings

Edit `/boot/config/vfio-pci.cfg` to bind PCI devices at boot:

```text
BIND=0000:01:00.0|10de:2484
BIND=0000:01:00.1|10de:228b
```

Format: `BIND=<PCI_ADDRESS>|<VENDOR_ID>:<DEVICE_ID>`
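When scripting VFIO setup, the `BIND=` format above is easy to parse. A minimal sketch, assuming one `BIND=` entry per line as shown (the `parse_vfio_cfg` helper is hypothetical, not an Unraid utility):

```python
def parse_vfio_cfg(text: str) -> list[tuple[str, str]]:
    """Parse BIND lines from vfio-pci.cfg into (pci_address, vendor:device) pairs.

    Skips blank lines and anything that is not a BIND= entry.
    """
    bindings = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("BIND="):
            continue
        addr, _, ids = line[len("BIND="):].partition("|")
        bindings.append((addr, ids))
    return bindings

cfg = """\
BIND=0000:01:00.0|10de:2484
BIND=0000:01:00.1|10de:228b
"""
print(parse_vfio_cfg(cfg))
```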
### Kernel Parameters

Edit `/boot/syslinux/syslinux.cfg` to enable the IOMMU and stub the devices (`intel_iommu=on` applies to Intel CPUs; on AMD systems the IOMMU is enabled by default):

```text
append ... iommu=pt intel_iommu=on vfio-pci.ids=10de:2484,10de:228b
```
### Runtime Binding

```shell
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1
```
### XML hostdev Block

Add to the VM XML `<devices>` section:

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
</hostdev>
```
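Writing hostdev blocks by hand is error-prone, since one PCI address splits into four attributes. A small sketch that derives the `<address>` fields from a full PCI address (the `hostdev_xml` helper is illustrative, not a libvirt API):

```python
import xml.etree.ElementTree as ET

def hostdev_xml(pci_addr: str) -> str:
    """Build a <hostdev> element from a full PCI address like '0000:01:00.0'."""
    dom, bus, rest = pci_addr.split(":")   # domain, bus, "slot.function"
    slot, func = rest.split(".")
    hostdev = ET.Element("hostdev", mode="subsystem", type="pci", managed="yes")
    source = ET.SubElement(hostdev, "source")
    ET.SubElement(source, "address",
                  domain=f"0x{dom}", bus=f"0x{bus}",
                  slot=f"0x{slot}", function=f"0x{func}")
    return ET.tostring(hostdev, encoding="unicode")

print(hostdev_xml("0000:01:00.0"))
```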
### IOMMU Group Enumeration

```shell
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n="${d##*/}"; g="${d%/devices/*}"; g="${g##*/}"
  echo "IOMMU Group $g: $(lspci -nns "$n")"
done
```
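Every endpoint in an IOMMU group must be handed to VFIO together, so it helps to view groups as a mapping rather than a flat listing. A Python equivalent of the parsing in the shell loop above, shown on hard-coded example paths (in practice you would glob `/sys/kernel/iommu_groups/*/devices/*`):

```python
from collections import defaultdict

def group_devices(paths: list[str]) -> dict[str, list[str]]:
    """Group sysfs IOMMU device paths by group number."""
    groups: dict[str, list[str]] = defaultdict(list)
    for p in paths:
        # .../iommu_groups/<group>/devices/<pci_address>
        parts = p.split("/")
        group, dev = parts[-3], parts[-1]
        groups[group].append(dev)
    return dict(groups)

paths = [
    "/sys/kernel/iommu_groups/13/devices/0000:01:00.0",
    "/sys/kernel/iommu_groups/13/devices/0000:01:00.1",
    "/sys/kernel/iommu_groups/14/devices/0000:02:00.0",
]
print(group_devices(paths))
```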
## VM Cloning

### Full Clone

```shell
virt-clone --original=SourceVM --name=CloneVM --auto-clone
```

Creates independent copies of all disks; once cloned, the clone can run at the same time as the source. Note that virt-clone requires the original domain to be shut off or paused.
### Linked Clone (COW)

```shell
qemu-img create -f qcow2 \
  -b /mnt/user/domains/SourceVM/vdisk1.img \
  -F qcow2 /mnt/user/domains/CloneVM/vdisk1.img
```

Creation is instant: the overlay shares the base image and stores only the differences. The backing file must remain accessible and must not be modified while overlays depend on it.
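For scripted cloning, a hedged sketch that composes the overlay command above as an argv list (the `linked_clone_cmd` helper is hypothetical). Unlike `virt-clone --auto-clone`, a linked clone does not get its own domain definition, so remember to give the clone a VM XML with a new name, UUID, and MAC address before defining it.

```python
def linked_clone_cmd(base: str, overlay: str) -> list[str]:
    """Compose the qemu-img argv for a qcow2 overlay backed by `base`."""
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", base, "-F", "qcow2", overlay]

argv = linked_clone_cmd(
    "/mnt/user/domains/SourceVM/vdisk1.img",
    "/mnt/user/domains/CloneVM/vdisk1.img",
)
# Run with: subprocess.run(argv, check=True)
print(argv)
```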
## XML Template Reference

See `${CLAUDE_PLUGIN_ROOT}/skills/vm-provisioning/references/vm-xml-template.md` for a complete annotated libvirt domain XML template with UEFI boot, cloud-init, GPU passthrough, and USB passthrough examples.