From unraid-assistant
Use when the user asks about creating, configuring, or cloning VMs on Unraid, including libvirt, virsh, cloud-init, GPU passthrough, or VFIO setup. Examples: "create vm", "provision vm", "gpu passthrough", "cloud-init unraid", "virsh commands".
```
npx claudepluginhub jamesprial/prial-plugins --plugin unraid-assistant
```

This skill uses the workspace's default tool permissions.
Unraid uses three layers for virtualization:
| Layer | Role |
|---|---|
| KVM | Kernel hypervisor module |
| QEMU | Hardware emulation (disk, network, display) |
| libvirt | Management daemon, XML domain definitions, virsh CLI |
CRITICAL: The GraphQL API cannot create VMs. It only supports lifecycle operations (start, stop, pause) on existing VMs. Use libvirt for all VM provisioning.
| Path | Contents |
|---|---|
| `/mnt/user/system/libvirt/libvirt.img` | Loop-mounted image containing all VM XML definitions |
| `/mnt/user/domains/<VM_NAME>/vdisk1.img` | VM disk images (qcow2 or raw) |
| `/mnt/user/isos/` | ISO images for installation or cloud-init seeds |
| Command | Purpose |
|---|---|
| `virsh list --all` | List all VMs with state |
| `virsh define myvm.xml` | Register a VM from an XML file |
| `virsh undefine MyVM` | Remove VM definition (keeps disks) |
| `virsh start MyVM` | Start a defined VM |
| `virsh shutdown MyVM` | Graceful ACPI shutdown |
| `virsh destroy MyVM` | Force power off |
| `virsh dumpxml MyVM` | Export the current XML definition |
| `virsh edit MyVM` | Edit the XML in place |
| `virsh snapshot-create-as --domain MyVM --name snap1 --disk-only --atomic` | Create a disk-only snapshot |
| `virsh vol-create-as domains disk.qcow2 80G --format qcow2` | Create a storage volume |
| `virsh nodedev-detach pci_0000_01_00_0` | Detach a PCI device for passthrough |
| `virsh nodedev-reattach pci_0000_01_00_0` | Reattach a PCI device to the host |
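`virsh shutdown` only sends an ACPI request and returns immediately, so scripts usually poll the domain state before forcing power off. A minimal sketch of that pattern; `wait_for_shutdown` is a hypothetical helper name, not an Unraid or libvirt command:

```shell
# Hypothetical helper: graceful shutdown with a forced fallback.
# Sends an ACPI shutdown, polls the state, forces power off on timeout.
wait_for_shutdown() {
    local vm=$1 tries=${2:-12}
    virsh shutdown "$vm" >/dev/null 2>&1
    local i
    for i in $(seq 1 "$tries"); do
        [ "$(virsh domstate "$vm" 2>/dev/null)" = "shut off" ] && return 0
        sleep 5
    done
    virsh destroy "$vm" >/dev/null 2>&1   # last resort: force power off
}
```

Example: `wait_for_shutdown MyVM` polls for about a minute before falling back to `virsh destroy MyVM`.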
The same operations are available from Python via the libvirt bindings:

```python
import libvirt

# Local connection (on the Unraid host)
conn = libvirt.open("qemu:///system")

# Remote connection (over SSH) -- use instead of the local URI
# conn = libvirt.open("qemu+ssh://root@192.168.1.100/system")

# Define and start a VM
with open("myvm.xml") as f:
    xml = f.read()
dom = conn.defineXML(xml)
dom.create()  # start the VM

# List all domains
for dom in conn.listAllDomains():
    print(dom.name(), dom.state())

conn.close()
```
Provision headless Linux VMs without interactive installation:
Create an overlay disk backed by a cloud image (the Ubuntu image name is an example):

```shell
qemu-img create -f qcow2 \
  -b /mnt/user/isos/jammy-server-cloudimg-amd64.img \
  -F qcow2 /mnt/user/domains/MyVM/vdisk1.img 40G
```

Write `/tmp/user-data`:

```yaml
#cloud-config
hostname: myvm
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@host
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

Write `/tmp/meta-data`:

```yaml
instance-id: myvm-001
local-hostname: myvm
```

Build the cloud-init seed ISO from both files:

```shell
genisoimage -output /mnt/user/isos/myvm-cloud-init.iso \
  -volid cidata -rational-rock -joliet \
  /tmp/user-data /tmp/meta-data
```
Attach the ISO as a CD-ROM in the VM XML (see the reference template), then:
```shell
virsh define /tmp/myvm.xml
virsh start MyVM
```
The VM boots, applies cloud-init configuration, and is accessible via SSH.
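To find the address for that SSH login, `virsh domifaddr MyVM` reports the guest's interface addresses (from the DHCP lease or the guest agent). A sketch of extracting the bare IP from its output, using a stand-in sample line since the real MAC and address will differ:

```shell
# Stand-in for one data line of `virsh domifaddr MyVM` output (assumed values):
sample=' vnet0      52:54:00:ab:cd:ef    ipv4         192.168.122.50/24'
# Column 3 is the protocol, column 4 is ADDR/PREFIX; strip the prefix length.
ip=$(echo "$sample" | awk '$3 == "ipv4" {split($4, a, "/"); print a[1]}')
echo "$ip"
```

With the `admin` user from the cloud-init config above: `ssh admin@"$ip"`.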
Edit `/boot/config/vfio-pci.cfg` to bind PCI devices at boot:

```
BIND=0000:01:00.0|10de:2484
BIND=0000:01:00.1|10de:228b
```

Format: `BIND=<PCI_ADDRESS>|<VENDOR_ID>:<DEVICE_ID>`
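The vendor:device IDs come from `lspci -nn`. A sketch that assembles a BIND line from one such output line; the sample line is illustrative, not taken from your hardware:

```shell
# Stand-in for one line of `lspci -nn` output (assumed GPU at 01:00.0):
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA104 [GeForce RTX 3070] [10de:2484] (rev a1)'
# PCI address is the first field; prepend the 0000 domain.
addr="0000:${line%% *}"
# The vendor:device pair is the bracketed xxxx:xxxx token.
ids=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
bind="BIND=${addr}|${ids}"
echo "$bind"
```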
Edit `/boot/syslinux/syslinux.cfg` to add IOMMU support and device stubbing to the `append` line (on AMD systems, use `amd_iommu=on` instead of `intel_iommu=on`):

```
append ... iommu=pt intel_iommu=on vfio-pci.ids=10de:2484,10de:228b
```
```shell
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1
```
Add to the VM XML `<devices>` section:

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
</hostdev>
```
List IOMMU groups to verify the device you are passing through is isolated:

```shell
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n="${d##*/}"; g="${d%/devices/*}"; g="${g##*/}"
  echo "IOMMU Group $g: $(lspci -nns "$n")"
done
```
Full clone with virt-clone:

```shell
virt-clone --original=SourceVM --name=CloneVM --auto-clone
```

This creates independent copies of all disks; the clone can run simultaneously with the source.
Linked clone with a qcow2 backing file:

```shell
qemu-img create -f qcow2 \
  -b /mnt/user/domains/SourceVM/vdisk1.img \
  -F qcow2 /mnt/user/domains/CloneVM/vdisk1.img
```

Creation is instant and the clone shares the base image, storing only differences. The backing file must remain accessible for the clone's lifetime.
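If the base image later needs to move or be deleted, a linked clone can be flattened into a standalone image with `qemu-img convert`, which folds backing-file data into the copy. A sketch, assuming the VM is shut off; `flatten_clone` is a hypothetical helper name:

```shell
# Hypothetical helper: flatten a qcow2 overlay into a standalone image,
# then swap it into place. The VM using the overlay must be shut off.
flatten_clone() {
    local overlay=$1 tmp="$1.flat"
    qemu-img convert -O qcow2 "$overlay" "$tmp" && mv "$tmp" "$overlay"
}
```

Example: `flatten_clone /mnt/user/domains/CloneVM/vdisk1.img`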
See ${CLAUDE_PLUGIN_ROOT}/skills/vm-provisioning/references/vm-xml-template.md for a complete annotated libvirt domain XML template with UEFI boot, cloud-init, GPU passthrough, and USB passthrough examples.