
🏡 Homelab V: Proxmox VMs and cloud-init

In the previous part of this series, I set up a Proxmox dynamic inventory with Ansible and created a basic LXC template for building automation-ready containers.

In this part, I’ll set up some cloud-init configs to initialize VMs into a state where they can automatically be managed by Ansible.

Cloud Images

While it’s possible to prepare custom base images for cloud-init, many Linux distributions (Ubuntu, Fedora, Debian, etc.) already provide ready-to-use cloud images. This allows for a (mostly) unified configuration that works across distros and even some *BSD variants.

Pull a cloud image to storage:

root@r720$ cd /tmp
root@r720$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
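
Before importing, it’s worth a quick sanity check on the download. qemu-img, which ships with QEMU on the Proxmox host, will report the image’s format and virtual size:

root@r720$ qemu-img info /tmp/focal-server-cloudimg-amd64.img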

Alternatively, the cloud-init CLI can be used to build custom images.

Template VM

Similar to the process for LXC, image templates will be used for VM creation. But unlike containers, it is not necessary to boot into the VM and manually tweak the environment. The first-time setup can all be done with cloud-init. The Proxmox wiki has some examples, although the documentation is a bit sparse.

First, create the VM using the qm CLI:

root@r720$ qm create 2000 \
--name ubuntu-cloudinit-template \
--net0 virtio,bridge=vmbr0 \
--memory 8192 \
--cores 4

Import the cloud image downloaded earlier (or use a custom one), attach it to the VM, and resize as necessary:

root@r720$ qm importdisk 2000 /tmp/focal-server-cloudimg-amd64.img ssd-mirror --format qcow2
root@r720$ qm set 2000 --scsihw virtio-scsi-pci --scsi0 ssd-mirror:vm-2000-disk-0
root@r720$ qm resize 2000 scsi0 +32G # increase it by 32GB

Add a CDROM drive for cloud-init, and restrict BIOS to boot from disk only to speed up the boot process:

root@r720$ qm set 2000 --ide2 ssd-mirror:cloudinit
root@r720$ qm set 2000 --boot c --bootdisk scsi0

It is recommended to attach a serial console to the VM as well:

root@r720$ qm set 2000 --serial0 socket --vga serial0
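
Once a clone of this template is up and running (like the test VMs created below), the serial console can be attached to directly from the host:

root@r720$ qm terminal 101 # press Ctrl+O to exit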

And enable the guest agent, which will be installed later by cloud-init:

root@r720$ qm set 2000 --agent 1
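
With the hardware settings in place, the VM’s configuration so far can be double-checked:

root@r720$ qm config 2000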

Cloud config

With the VM ready, it’s time to tweak the cloud config. Proxmox autogenerates configs for the user, network, and meta types; they can be inspected like so:

root@r720$ qm cloudinit dump 2000 user # or `network` or `meta`
#cloud-config
hostname: ubuntu-cloudinit-template
manage_etc_hosts: true
chpasswd:
  expire: False
users:
  - default
package_upgrade: true

In my opinion, the way Proxmox templates its cloud config is a bit backwards. Instead of autogenerating vendor data, Proxmox generates user data. This is slightly annoying because if a custom user configuration YAML is attached, the config can no longer autoassign the hostname based on the VM name. Unfortunately, the options for changing the default configuration template are relatively limited. So for any complicated cloud config, I’ll abuse vendor data instead. It’s worth noting that user config takes precedence over vendor, so any values defined in user will not be overwritten.

💡 Custom cloud config files are stored under the snippets type.

In /rusty-z2/pve/snippets/ubuntu-qemu.yaml:

#cloud-config
user: rob
password: $6$himalayan$E.K7G4g7NoIW69HLpmK1QDU1JMN4aaSYPOOGX1SwoSl.uqr64JruCEeDH0nLi9CxJR1/2HGTnTDVKfCC2ubub1
ssh_import_id:
  - gh:robherley
packages:
  - qemu-guest-agent

This does mostly the same prep work as the container process did. It will:

  • Overwrite the default user’s name with rob. This user will also implicitly get an entry in /etc/sudoers.d/90-cloud-init-users for passwordless sudo.
  • Run ssh-import-id to pull public keys from GitHub.
  • Add qemu-guest-agent, a helper daemon used to exchange information between the host and guest.
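
A note on the password field in the YAML above: it’s a SHA-512 crypt hash, not plaintext. Assuming OpenSSL 1.1.1 or newer is available, a compatible hash can be generated like so:

root@r720$ openssl passwd -6 # prompts for the password, prints the hash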

Add the vendor cloud config to the template VM:

root@r720$ qm set 2000 --cicustom "vendor=rusty-dir:snippets/ubuntu-qemu.yaml"

The network config also needs a slight tweak to use DHCP:

root@r720$ qm set 2000 --ipconfig0 ip=dhcp
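
If DHCP isn’t an option (or fixed addresses are preferred), a static config can be set instead. The address and gateway below are just placeholders:

root@r720$ qm set 2000 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1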

And finally, set the VM to be a template:

root@r720$ qm template 2000

To test, just clone some VMs from the template:

root@r720$ qm clone 2000 101 --full --name thing1
root@r720$ qm clone 2000 102 --full --name thing2
root@r720$ qm start 101
root@r720$ qm start 102
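
For more than a couple of clones, a small shell loop saves some typing. A minimal sketch, assuming the ID and naming pattern above:

root@r720$ for i in 1 2; do qm clone 2000 10$i --full --name thing$i && qm start 10$i; done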

These take a bit longer than the containers to spin up, since they need to run through the cloud-init process after booting. After waiting a while, they should be reachable:

root@r720$ ansible proxmox_all_running -m ping
thing2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
thing1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
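
If a guest doesn’t answer right away, cloud-init’s progress can be checked from inside the VM (e.g. over the serial console). The status subcommand with --wait blocks until the first boot finishes:

rob@thing1$ cloud-init status --wait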

Note: For the qemu-guest-agent to be detected by Proxmox, the VM needs to be stopped by Proxmox (or the qm CLI) and restarted.
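
After that restart, the agent connection can be verified from the host; the command below errors out if the guest agent isn’t reachable:

root@r720$ qm agent 101 ping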

Next

Automation-ready virtual machines and containers can now be created programmatically. Using the CLI can be a bit of a pain, and while the web console is fine for one-offs, this process can be improved with a wonderful tool from HashiCorp called Terraform. In the next part of the series, Terraform will be used as infrastructure as code (IaC) to provision guests on Proxmox.
