Automating deployments on Proxmox with OpenTofu and cloud-init
Go-Between
Putting Everything Together
My plan is to use OpenTofu to create my virtual machines and then cloud-init to configure them. As stated, the goal of this article is to deploy a reverse proxy called rproxy and a number of web servers in the configuration shown in Figure 3. I usually let the virtual machines get their IPs from the router over DHCP, because OpenTofu is capable of interfacing with certain routers to set static DNS entries and DHCP reservations; however, for the sake of simplicity, I will skip the router configuration and have cloud-init set static IP addresses instead.

The first step toward this goal is to set up a Proxmox VE host. Proxmox can be installed according to the official documentation [10], and it isn't more complex than installing any other Linux distribution. For this example, I use a humble desktop computer made from cannibalized parts with 8GB of RAM and an i3 CPU, which is fine for toying around. For long-term usage, though, it is much better to use server hardware. At the office, we use PowerEdge T40 servers in tower form as an inexpensive option.
Once your Proxmox server is up and running, you need to configure its storage to accept snippets, which is a fancy way of saying you are allowing Proxmox to store the extraneous files needed so that OpenTofu can upload cloud-init configuration files to the host (Figure 4).
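Besides the GUI route shown in Figure 4, you can enable snippets from the host's shell with pvesm. A minimal sketch, assuming the default local datastore with its stock content types (adjust the list to whatever your datastore already serves):

# add "snippets" to the content types the "local" datastore accepts
pvesm set local --content backup,iso,vztmpl,snippets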

Connect to your Proxmox server over SSH and issue the commands in Listing 1 to download a cloud-init-capable OpenBSD image and create a template from it. OpenTofu will use this template as a base for every virtual machine you want to create.
Listing 1
Create Template for VMs
# Download the image. https://bsd-cloud-image.org/ is used as a source
mkdir tmpimages
cd tmpimages
wget https://github.com/hcartiaux/openbsd-cloud-image/releases/download/v7.5_2024-05-13-15-25/openbsd-min.qcow2
# Create a virtual machine and assign the downloaded image to it
qm create 100 --name openbsd-master --memory 1024 --agent 1,type=isa --scsihw virtio-scsi-single --boot order='scsi0' --net0 virtio,bridge=vmbr0 --serial0 socket --vga serial0
qm set 100 --scsi0 local-lvm:0,import-from=/root/tmpimages/openbsd-min.qcow2
# Turn the VM into a template
qm template 100
The steps in Listing 1 are mostly self-explanatory, but some of the arguments passed to qm create are worth a comment:
- --agent 1,type=isa tells the Proxmox host to communicate with the guest through a QEMU agent, a daemon that runs inside the guest and accepts instructions from Proxmox when Proxmox needs to perform tasks such as shutting down the guest. Communication occurs over a virtual ISA serial port.
- --scsihw virtio-scsi-single defines the virtual controller for the guest's virtual hard drive.
- --boot order='scsi0' ensures the virtual machine boots from its associated virtual drive.
- --serial0 socket and --vga serial0 create a serial interface that acts pretty much like a serial terminal.
The next step is to install OpenTofu on your workstation. The generic installer from the OpenTofu website can be used on a Devuan system as shown in Listing 2. Keep in mind that your user needs to be granted sudo privileges before running the installer (e.g., with visudo).
Listing 2
Installing OpenTofu on Devuan
# Download the installer and mark it as executable
wget https://get.opentofu.org/install-opentofu.sh
chmod +x install-opentofu.sh
# Execute the installer. OpenTofu will be downloaded and the repositories from the OpenTofu project will be added to your sources.list
./install-opentofu.sh --install-method deb
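When the installer finishes, confirm that the tofu binary is available:

# tofu version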
A project folder must be created and populated. It is common practice to put the project folder under version control (with Git, CVS, or a similar tool), but this is not necessary. For this example, I create a folder called tofu_project and populate it with the files main.tf (Listing 3), machines.tf (Listing 4), and cloud-init.tf (Listing 5).
Listing 3
tofu_project/main.tf
01 terraform {
02   required_providers {
03     proxmox = {
04       source  = "bpg/proxmox"
05       version = "0.64.0"
06     }
07   }
08 }
09
10 provider "proxmox" {
11   endpoint = "https://192.168.3.15:8006/"
12   username = "root@pam"
13   password = "proxmox"
14   insecure = true
15   tmp_dir  = "/var/tmp"
16
17   ssh {
18     agent = true
19   }
20 }
Listing 4
tofu_project/machines.tf
01 resource "proxmox_virtual_environment_vm" "rproxy" { 02 name = "rproxy" 03 description = "Reverse proxy" 04 node_name = "proxmox" 05 06 clone { 07 vm_id = 100 08 } 09 10 network_device { 11 model = "virtio" 12 bridge = "vmbr0" 13 } 14 15 depends_on = [ 16 proxmox_virtual_environment_file.rproxy_cloud_config, 17 proxmox_virtual_environment_file.network_cloud_config, 18 ] 19 20 initialization { 21 user_data_file_id = proxmox_virtual_environment_file.rproxy_cloud_config.id 22 network_data_file_id = proxmox_virtual_environment_file.network_cloud_config[0].id 23 } 24 } 25 26 resource "proxmox_virtual_environment_vm" "webserver" { 27 count = "3" 28 name = "webserver-${count.index+1}" 29 description = "Generic web server" 30 node_name = "proxmox" 31 32 clone { 33 vm_id = 100 34 } 35 36 network_device { 37 model = "virtio" 38 bridge = "vmbr0" 39 } 40 41 depends_on = [ 42 proxmox_virtual_environment_file.webserver_cloud_config, 43 proxmox_virtual_environment_file.network_cloud_config, 44 proxmox_virtual_environment_vm.rproxy, 45 ] 46 initialization { 47 user_data_file_id = proxmox_virtual_environment_file.webserver_cloud_config.id 48 network_data_file_id = proxmox_virtual_environment_file.network_cloud_config[count.index+1].id 49 } 50 }
Listing 5
tofu_project/cloud-init.tf
01 resource "proxmox_virtual_environment_file" "rproxy_cloud_config" { 02 content_type = "snippets" 03 datastore_id = "local" 04 node_name = "proxmox" 05 06 source_file { 07 path = "rproxy.yml" 08 file_name = "user_data_vm-rproxy.yml" 09 } 10 } 11 12 resource "proxmox_virtual_environment_file" "webserver_cloud_config" { 13 content_type = "snippets" 14 datastore_id = "local" 15 node_name = "proxmox" 16 17 source_file { 18 path = "webserver.yml" 19 file_name = "user_data_vm-webserver.yml" 20 } 21 } 22 23 resource "proxmox_virtual_environment_file" "network_cloud_config" { 24 count = 4 25 content_type = "snippets" 26 datastore_id = "local" 27 node_name = "proxmox" 28 29 source_raw { 30 data = templatefile("network.tftpl", {myip = "192.168.3.${count.index+30}"}) 31 file_name = "network_data_vm-${count.index}.yml" 32 } 33 }
The main.tf file defines which providers, and which versions if need be, are used for the project. The provider I selected, bpg/proxmox, connects to the host over SSH to automatically create the environment defined by the project. OpenTofu can also use the regular Proxmox API with this provider. The configuration parameters passed to the Proxmox provider in lines 10-19 of Listing 3 are self-explanatory. Keep in mind that hardcoding credentials in the project file is fine for testing, but for a production environment you should consider passing the username and password with the environment variables PROXMOX_VE_USERNAME and PROXMOX_VE_PASSWORD [11].
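For example, you could delete the username and password lines from Listing 3 and export the credentials in the shell session from which you run OpenTofu (the values shown are the test credentials used in this article):

export PROXMOX_VE_USERNAME='root@pam'
export PROXMOX_VE_PASSWORD='proxmox'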
The machines.tf file contains information about the virtual machines you want to create. OpenTofu creates new virtual machines by cloning from the template defined earlier. In the example, a proxmox_virtual_environment_vm resource named rproxy is created (Listing 4, lines 1-24), whose task will be to act as a reverse proxy for the web servers. The next block in the file defines a resource named webserver (lines 26-50) with a count = "3" parameter, so three instances are created with the names webserver-1, webserver-2, and webserver-3. The rproxy resource is defined as a dependency of the web servers in line 44 to ensure it is created before the rest of the machines.
You will notice some proxmox_virtual_environment_file resources are declared as hard dependencies of the virtual machines to ensure they are not created without a proper cloud-init configuration in place. I have already mentioned that Proxmox's capabilities for setting cloud-init parameters from the GUI are very limited; thankfully, you can leverage the full power of cloud-init with cicustom files, which contain all the cloud-init parameters you want to pass to each of your virtual machines and are uploaded to the Proxmox VE host before each instance is created.
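For context, cicustom is a native Proxmox VM option: OpenTofu merely uploads the snippet files and points each virtual machine at them. Done by hand on the host, the equivalent would look roughly like the following sketch (the VM ID 101 is hypothetical; the file names match those generated by Listing 5):

# attach custom cloud-init user and network data to an existing VM
qm set 101 --cicustom "user=local:snippets/user_data_vm-rproxy.yml,network=local:snippets/network_data_vm-0.yml"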
The cloud-init.tf file lists the cicustom files that will be uploaded to Proxmox. A cloud-init configuration file for rproxy is defined in lines 1-10 of Listing 5, and another, applied to all the web servers, in lines 12-21. These files are stored in the project folder and are uploaded to the server by OpenTofu just before the virtual machines are created. You can see both files in Listings 6 and 7. (See the "Different cloud-init Files" box.)
Listing 6
tofu_project/rproxy.yml
01 #cloud-config
02 users:
03   - name: openbsd
04     gecos: openbsd
05     groups: wheel
06     plain_text_passwd: openbsd
07     lock_passwd: false
08     doas: [permit nopass openbsd]
09
10 write_files:
11   - path: /etc/relayd.conf
12     content: |
13       table <webservers> { 192.168.3.31 192.168.3.32 192.168.3.33 }
14       http protocol "http" {
15         match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
16         match request header set "X-Forwarded-Port" value "$SERVER_PORT"
17         match request header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
18       }
19
20       relay "webservice" {
21         listen on 192.168.3.30 port 80
22         protocol "http"
23         forward to <webservers> port 80 mode loadbalance \
24           check http "/index.html" code 200
25       }
26     owner: 'root:wheel'
27     permissions: '0644'
28     defer: true
29
30 runcmd:
31   - pkg_add qemu-ga
32   - rcctl enable qemu_ga
33   - echo 'qemu_ga_flags="-t /var/run/qemu-ga -m isa-serial -p /dev/cua01 -f /var/run/qemu-ga/qemu-ga.pid"' >> /etc/rc.conf.local
34   - rcctl start qemu_ga
35   - rcctl enable relayd
36   - rcctl start relayd
Listing 7
tofu_project/webserver.yml
01 #cloud-config
02 users:
03   - name: openbsd
04     gecos: openbsd
05     groups: wheel
06     plain_text_passwd: openbsd
07     lock_passwd: false
08     doas: [permit nopass openbsd]
09
10 write_files:
11   - path: /etc/httpd.conf
12     content: |
13       server "default" {
14         listen on * port 80
15       }
16   - path: /var/www/htdocs/index.html
17     content: |
18       <!DOCTYPE html>
19       <html lang="en">
20         <head>
21           <meta charset="utf-8">
22           <title>Hello, World</title>
23         </head>
24         <body>
25           <h1>Hello, world!</h1>
26           <p>This is an example file</p>
27         </body>
28       </html>
29
30 runcmd:
31   - pkg_add qemu-ga
32   - rcctl enable qemu_ga
33   - echo 'qemu_ga_flags="-t /var/run/qemu-ga -m isa-serial -p /dev/cua01 -f /var/run/qemu-ga/qemu-ga.pid"' >> /etc/rc.conf.local
34   - rcctl start qemu_ga
35   - rcctl enable httpd
36   - rcctl start httpd
Different cloud-init Files
You will notice I use two separate types of cloud-init files to supply configuration data to the virtual machines: user data files and network data files. cloud-init supports three configuration levels, and you may use a cicustom file for each:
- The vendor file is used to provide the virtual machine with the configuration chosen by the vendor. It is used by cloud providers to set a baseline for all the instances that run on the platform. If you rent a virtual private server (VPS) from a cloud operator, your image could come preconfigured at the vendor level.
- The user file contains settings specified by the person who requested the machine. When you rent a VPS, you get to choose things like your password, for example. User-specific data is intended to be supplied at the user level.
- The network file, as you might have guessed, is used to configure the virtual machines' networking.
The cloud-init files in this article are YAML files. Note that it is imperative to avoid tabs for indentation: YAML only accepts spaces, and tabs will break the files.
The users directive in each file commands the creation of the user openbsd with the password openbsd (see the "How User Creation Works" box). The write_files directive instructs cloud-init to place the configuration files the services will need. Finally, the runcmd directive lists the commands cloud-init will run on first boot to install, enable, configure, and start the QEMU guest agent (qemu-ga) and then enable and start the main service of the virtual machine with rcctl (the OpenBSD command for managing services).
How User Creation Works
The users directive in cloud-init is not very intuitive, so it deserves some explanation.
To begin, name defines the username of the new user, gecos defines the "real name," and groups adds the openbsd user to the wheel group, which is a group with certain administrative rights (e.g., the ability to become the superuser with su, provided the user has a valid password and root happens to be unlocked).
The cloud-init utility creates users with locked passwords by default to force users to adopt SSH key authentication instead of regular passwords. You are supposed to use the ssh_authorized_keys parameter to provide your SSH public key. Because this article is didactic in nature, I have chosen to simplify, set a password with plain_text_passwd, and unlock it with lock_passwd: false. Keeping credentials in plain text in your project folder is considered a bad idea for production, so keep this in mind while you play around with OpenTofu.
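For comparison, here is a minimal sketch of the recommended variant, which leaves the password locked and supplies a public key instead (the key string is a placeholder for your own):

#cloud-config
users:
  - name: openbsd
    gecos: openbsd
    groups: wheel
    # no plain_text_passwd, no lock_passwd: false; the password stays locked
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1... openbsd@workstation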
The doas configuration in the examples might seem alien to Linux users; it is the OpenBSD equivalent of sudo. The doas parameter in my cloud-init files grants the openbsd user some doas rights. Think of it as granting a regular Linux user the ability to use sudo. Although sudo is available in OpenBSD, OpenBSD users tend to favor doas.
The configuration of the network (Listing 5, lines 23-33) is a bit tricky and is accomplished with templates (line 30). A network_cloud_config resource is created with count = 4, so OpenTofu creates a network cloud-init file for each of the virtual machines. These files are named network_data_vm-0.yml, network_data_vm-1.yml, and so on. The files resulting from the network.tftpl template (Listing 8) are uploaded to Proxmox and assign the virtual machines IP addresses from 192.168.3.30 onward.
Listing 8
tofu_project/network.tftpl
network:
  version: 2
  ethernets:
    vio0:
      addresses:
        - ${myip}/24
      gateway4: 192.168.3.1
      nameservers:
        addresses:
          - 192.168.3.1
The user_data_file_id and network_data_file_id parameters for each machine in Listing 4 ensure that Proxmox loads the corresponding cloud-init files when creating each virtual machine. The rproxy resource loads network_data_vm-0.yml, and the web servers each load a file from network_data_vm-1.yml onward.
Time for Deployment
With all of the code in place, it is time to initialize OpenTofu and command it to perform the deployment. First, the initialization command installs all the modules required by the project (Figure 5):
# tofu init
Second, ensure OpenTofu will do the right thing when you request your systems be deployed by running the following command from the project folder (Figure 6):
# tofu plan

Final deployment could take some time, but at this point it will be fully automated. Just issue
# tofu apply --parallelism=1
and watch OpenTofu work its magic. In this example, I use --parallelism=1 because my testing hardware is weak and I want to limit OpenTofu to a single concurrent operation. On regular server hardware, the option can be omitted. When you get bored of your testing environment, you can trash it with tofu destroy.
Conclusion
Automating deployments is an involved process, but once the process is completed, it will save you a lot of time. OpenTofu, in combination with cloud-init, is a good option for automating deployments on Proxmox.
The Proxmox provider used in this article comes from bpg, but it is worth noting that Telmate [12] also offers a provider. Both have limitations that you will hit when you attempt complex tasks (e.g., leveraging Proxmox's native high-availability mechanisms or using its native firewall systems). Still, what you can accomplish with currently existing code is impressive.
Combining providers can be a very powerful proposition. For example, if you have a MikroTik router, you can use OpenTofu to add and remove DHCP leases and DNS entries at the same time you deploy your virtual machines. You can also use it to set firewall rules in the router. Although the example shown in this article is quite basic, once you become proficient with deployment automation, the sky is the limit.
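Purely as an illustration, such a combination could look like the following sketch, which assumes the community terraform-routeros/routeros provider; treat the resource type and attribute names as assumptions to verify against that provider's documentation:

# hypothetical static DNS entry for rproxy on a MikroTik router
resource "routeros_ip_dns_record" "rproxy" {
  name    = "rproxy.lan"
  address = "192.168.3.30"
  type    = "A"
}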
The code in this article has been kept simple for didactic reasons and has much room for improvement. For example, the number of web servers deployed is hardcoded instead of taken from a user-defined variable. The cloud-init user file for rproxy also has the web servers' IPs hardcoded instead of automatically defined. The way the network configuration is assigned to each virtual machine is a bit fragile. If you feel like solving these issues, the official documentation will be useful [13].
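The first of those improvements is a small change. A minimal sketch, assuming a user-defined variable named webserver_count (a name invented here for illustration) added to the project:

variable "webserver_count" {
  description = "Number of web servers to deploy"
  type        = number
  default     = 3
}

In machines.tf, count = "3" then becomes count = var.webserver_count, and the count = 4 in cloud-init.tf becomes var.webserver_count + 1 to cover rproxy's network file.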
Infos
[1] "Automatically Install and Configure Systems" by Martin Loschwitz, ADMIN, issue 52, 2019, pg. 62, https://www.admin-magazine.com/Archive/2019/52/Automatically-install-and-configure-systems
[2] Unattended OpenBSD installation and upgrade: https://man.openbsd.org/autoinstall.8
[3] preseed: https://wiki.debian.org/DebianInstaller/Preseed
[4] Kickstart installation at AlmaLinux wiki: https://wiki.almalinux.org/documentation/kickstart-guide.html
[5] "Virtualization with the Proxmox Virtual Environment 2.2" by Martin Loschwitz, Linux Magazine, issue 150, May 2013, pg. 22, https://www.linux-magazine.com/Issues/2013/150/Proxmox-VE
[6] "Proxmox Virtualization Manager" by Martin Loschwitz, ADMIN, issue 42, 2017, pg. 58, https://www.admin-magazine.com/Archive/2017/42/Proxmox-virtualization-manager
[7] "Broadcom's Stated Strategy Ignores Most VMware Customers" by Simon Sharwood, The Register, May 2022, https://www.theregister.com/2022/05/30/broadcom_strategy_vmware_customer_impact/
[8] "Infrastructure as Code with Terraform Blueprint" by Christian Rost, ADMIN, issue 43, 2018, pg. 42, https://www.admin-magazine.com/Archive/2018/43/Infrastructure-as-Code-with-Terraform
[9] A collection of prebuilt BSD cloud images: https://bsd-cloud-image.org/
[10] Installing Proxmox VE: https://pve.proxmox.com/pve-docs/chapter-pve-installation.html
[11] Environment variables with bpg/proxmox: https://github.com/bpg/terraform-provider-proxmox/blob/main/docs/index.md#environment-variables
[12] Telmate Proxmox provider: https://github.com/Telmate/terraform-provider-proxmox
[13] OpenTofu documentation: https://opentofu.org/docs/