Proxmox VE cluster management including VM provisioning, template creation with cloud-init, QEMU guest agent integration, storage pool management, VLAN-aware bridge configuration, and Proxmox API interactions. Use when working with Proxmox VE, creating VM templates, configuring Proxmox networking, managing CEPH storage, troubleshooting VM deployment issues, or interacting with Proxmox API.
```
/plugin marketplace add basher83/lunar-claude
/plugin install proxmox-infrastructure@lunar-claude
```
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files:

```
anti-patterns/common-mistakes.md
examples/01-basic-vm/README.md
examples/01-basic-vm/main.tf
examples/01-basic-vm/variables.tf
reference/api-reference.md
reference/cloud-init-patterns.md
reference/networking.md
reference/qemu-guest-agent.md
reference/storage-management.md
tools/check_ceph_health.py
tools/check_cluster_health.py
tools/cluster_status.py
tools/validate_template.py
workflows/ceph-deployment.md
workflows/cluster-formation.md
```

Expert guidance for managing Proxmox VE clusters, creating templates, provisioning VMs, and configuring network infrastructure.
Create VM Template:

```bash
# See tools/build-template.yml for the automated playbook
cd ansible && uv run ansible-playbook playbooks/proxmox-build-template.yml
```
Clone Template to VM:

```bash
qm clone <template-id> <new-vmid> --name <vm-name>
qm set <new-vmid> --sshkeys ~/.ssh/id_rsa.pub
qm set <new-vmid> --ipconfig0 ip=192.168.1.100/24,gw=192.168.1.1
qm start <new-vmid>
```
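For example, a full clone of a hypothetical template 9000 into VM 101 (IDs and addresses are placeholders):

```bash
# --full detaches the new VM from the template's base disk
qm clone 9000 101 --name docker-01 --full
qm set 101 --sshkeys ~/.ssh/id_rsa.pub
qm set 101 --ipconfig0 ip=192.168.1.100/24,gw=192.168.1.1
qm start 101
```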
Check Cluster Status:

```bash
# Use tools/cluster_status.py
./tools/cluster_status.py
```
Activate this skill when:
- Creating VM templates with cloud-init
- Provisioning or cloning VMs
- Configuring VLAN-aware bridges and Proxmox networking
- Managing CEPH storage
- Troubleshooting VM deployment issues
- Interacting with the Proxmox API
See tools/build-template.yml for complete automation.
See reference/cloud-init-patterns.md for detailed steps.
Key points:
- Use the virtio-scsi-pci controller for Ubuntu cloud images
- Attach the cloud-init drive on ide2
- Convert the finished VM with `qm template <vmid>` (see the sketch after the pointers below)

From Ansible: analyze the existing playbook ../../ansible/playbooks/proxmox-build-template.yml
From Terraform: See examples in ../../terraform/netbox-vm/
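A minimal manual sketch of those steps, assuming an Ubuntu Noble cloud image, VMID 9000, and local-lvm storage (all placeholders; the playbook automates this end to end):

```bash
# fetch a cloud image (URL is an assumption; any cloud image works)
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# create the VM shell with the virtio-scsi-pci controller
qm create 9000 --name ubuntu-template --memory 2048 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci

# import the image and make it the boot disk
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 --boot order=scsi0

# attach the cloud-init drive on ide2 and use the serial console
qm set 9000 --ide2 local-lvm:cloudinit --serial0 socket --vga serial0

# convert to a template
qm template 9000
```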
Key Configuration:

```yaml
# Ansible example
proxmox_kvm:
  node: foxtrot
  api_host: 192.168.3.5
  vmid: 101
  name: docker-01
  clone: ubuntu-template
  storage: local-lvm
  # Network with VLAN
  net:
    net0: 'virtio,bridge=vmbr0,tag=30'
  ipconfig:
    ipconfig0: 'ip=192.168.3.100/24,gw=192.168.3.1'
```
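The rough qm equivalent, for comparison (template and VM IDs are placeholders; `tag=30` is what puts the NIC on VLAN 30):

```bash
qm clone 9000 101 --name docker-01 --target foxtrot --storage local-lvm --full
qm set 101 --net0 virtio,bridge=vmbr0,tag=30
qm set 101 --ipconfig0 ip=192.168.3.100/24,gw=192.168.3.1
```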
This Virgo-Core cluster uses VLAN-aware bridges and dedicated CEPH networks. See reference/networking.md for bridge, VLAN, and MTU details.
Nodes: Foxtrot, Golf, Hotel (3× MINISFORUM MS-A2)
Network Architecture:

```
enp4s0      → vmbr0 (mgmt + vlan9 for corosync)
enp5s0f0np0 → vmbr1 (ceph public, MTU 9000)
enp5s0f1np1 → vmbr2 (ceph private, MTU 9000)
```
See ../../docs/goals.md for complete specs.
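A sketch of how that maps onto /etc/network/interfaces, with placeholder addresses (the real values live in the node configs):

```
# management bridge, VLAN-aware (addresses are placeholders)
auto vmbr0
iface vmbr0 inet static
    address 192.168.3.5/24
    gateway 192.168.3.1
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# ceph public bridge with jumbo frames
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.5/24
    bridge-ports enp5s0f0np0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```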
validate_template.py - Validate template health via API:

```bash
./tools/validate_template.py --template-id 9000
```

vm_diagnostics.py - VM health checks:

```bash
./tools/vm_diagnostics.py --vmid 101
```

cluster_status.py - Cluster health metrics:

```bash
./tools/cluster_status.py
```
build-template.yml - Automated template creation
configure-networking.yml - VLAN bridge setup
vm-module-example/ - Reusable VM provisioning
See examples/ directory.
Real Examples from Repository:
for_each pattern, cross-node cloning, dual NIC with VLAN (VLAN 30 primary, VLAN 2 secondary), Ansible integration.

Common issues and solutions:
Serial console required: many cloud images need the serial console configured:

```bash
qm set <vmid> --serial0 socket --vga serial0
```
Boot order: make sure the boot disk comes first:

```bash
qm set <vmid> --boot order=scsi0
```
VLAN not working:

Check the bridge is VLAN-aware:

```bash
grep "bridge-vlan-aware" /etc/network/interfaces
```

Verify the VLAN is in bridge-vids:

```bash
bridge vlan show
```
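Also confirm the tag actually landed on the VM NIC (VMID is a placeholder):

```bash
qm config 101 | grep ^net
```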
MTU problems (CEPH): Ensure MTU 9000 on storage networks:

```bash
ip link show vmbr1 | grep mtu
```
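If the link MTU looks right but CEPH still misbehaves, a jumbo-frame ping can confirm the path end to end (peer address is a placeholder; 8972 is 9000 minus 28 bytes of IP and ICMP headers):

```bash
ping -M do -s 8972 -c 3 10.10.10.6
```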
Check QEMU guest agent:

```bash
qm agent <vmid> ping
```
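If the ping times out, the agent is often not enabled on the VM or not installed in the guest. A typical fix (standard qm option; package name as on Ubuntu/Debian):

```bash
# on the host; takes effect after a stop/start
qm set <vmid> --agent enabled=1

# inside the guest
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
```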
Review cloud-init logs (in VM):

```bash
cloud-init status --wait
cat /var/log/cloud-init.log
```
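User-data and script output usually land in a separate log worth checking too:

```bash
cat /var/log/cloud-init-output.log
```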
Validate template exists:

```bash
qm list | grep template
```
For more issues, see troubleshooting/ directory.
For deeper knowledge:
- reference/api-reference.md - Proxmox API interactions
- reference/cloud-init-patterns.md - template and cloud-init details
- reference/networking.md - VLAN-aware bridges and MTU
- reference/qemu-guest-agent.md - agent setup and troubleshooting
- reference/storage-management.md - storage pools and CEPH
- workflows/cluster-formation.md and workflows/ceph-deployment.md - cluster and CEPH workflows