Most hosting providers don't offer Proxmox as a one-click install. If you want a full hypervisor with ZFS, clustering, and a web UI on bare metal, you have to install it yourself. The trick is doing it on a remote server you can only reach over SSH — no keyboard, no monitor, no USB stick.
This post walks through the entire process on a Hetzner dedicated server with 64 GB RAM and 4×512 GB NVMe drives.
Hetzner, like most dedicated providers, offers a rescue system: a minimal Linux environment that boots over the network and gives you SSH access to the raw hardware. Nothing is installed yet; the drives are blank or wiped.
From the Hetzner Robot panel:
Activate Rescue Mode (choose Linux 64-bit)
Reboot the server
SSH in using the temporary root credentials Hetzner provides
You're now sitting in a RAM-based Linux environment with direct access to all four NVMe drives.
Since we can't physically plug in a USB drive with the Proxmox ISO, we use QEMU to emulate a machine inside the rescue system and boot the ISO as a virtual CD-ROM — while writing directly to the real NVMe drives.
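First download the Proxmox VE ISO into the rescue system, then boot it in QEMU with the physical drives passed through. The command below is a sketch: the ISO version, CPU count, and RAM size are placeholders to adjust, and the drive paths assume the four NVMe devices show up as /dev/nvme0n1 through /dev/nvme3n1 (check with lsblk first).

# Grab the installer ISO (check the Proxmox downloads page for the current version)
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.2-1.iso

# Boot the installer, passing the real NVMe drives through as virtio disks
qemu-system-x86_64 \
  -enable-kvm \
  -smp 4 -m 8192 \
  -boot d \
  -cdrom proxmox-ve_8.2-1.iso \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/nvme1n1,format=raw,if=virtio \
  -drive file=/dev/nvme2n1,format=raw,if=virtio \
  -drive file=/dev/nvme3n1,format=raw,if=virtio \
  -vnc 0.0.0.0:0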
Open a VNC client (I used RealVNC Viewer) and connect to:
<your-server-ip>:5900
You'll see the standard Proxmox VE graphical installer. Walk through it:
Accept the EULA
Select the target disks — this is the most important step
Choose the filesystem: Select ZFS RAID10 if you have 4 drives. This gives you both redundancy (mirroring) and performance (striping). You can survive one drive failure per mirror pair.
Set the root password and admin email
Configure networking:
IP CIDR: <your-server-ip>/26
Gateway: <gateway-ip> (usually ends in .1)
DNS: 1.1.1.1 (Cloudflare) or 8.8.8.8 (Google)
Check "Automatically reboot after setup" and confirm
The installer writes directly to your physical NVMe drives. When it finishes, the VM will reboot — but since we're still inside QEMU, we need to handle the next boot ourselves.
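Kill the installer VM (Ctrl-C in your rescue SSH session is enough), then start a fresh QEMU instance that boots straight from the freshly installed drives, with no ISO attached. Same sketch, same assumptions about device names as above:

qemu-system-x86_64 \
  -enable-kvm \
  -smp 4 -m 8192 \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/nvme1n1,format=raw,if=virtio \
  -drive file=/dev/nvme2n1,format=raw,if=virtio \
  -drive file=/dev/nvme3n1,format=raw,if=virtio \
  -vnc 0.0.0.0:0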
Reconnect via VNC. You'll see the Proxmox login prompt. Log in with root and the password you set during installation.
The installer configured networking for the interface it saw inside the QEMU guest (a generic name like net0), but on bare metal the physical interface has a different name. We need to fix this before handing the boot back to the real hardware:
nano /etc/network/interfaces
Find the line referencing net0 and replace it with your actual interface name (e.g., enp41s0).
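If you don't know the real name, check from the rescue shell (in a second SSH session, or before launching QEMU); the command below is plain iproute2, nothing Hetzner-specific:

# List physical NICs in the rescue system to learn the bare-metal name
ip -br link
# Inside the installed Proxmox, the line to change is typically the
# bridge-ports entry, e.g.:
#   bridge-ports net0   ->   bridge-ports enp41s0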
QEMU is an open-source machine emulator and virtualizer. It can emulate an entire computer — CPU, memory, disks, network — in software, and when combined with KVM (Kernel-based Virtual Machine), it runs guest operating systems at near-native speed by leveraging hardware virtualization extensions (Intel VT-x / AMD-V).
In this guide, QEMU serves an unconventional purpose: instead of creating virtual disks, we pass the server's real physical drives directly into the emulated machine. This lets us boot the Proxmox graphical installer inside the rescue environment and have it write directly to the NVMe drives — as if we'd plugged in a USB stick and booted from it.
Key QEMU features used here:
-enable-kvm — Activates KVM acceleration. Without this, QEMU falls back to pure software emulation, which is dramatically slower.
-drive file=/dev/nvmeXn1,format=raw,if=virtio — Passes a raw block device (physical drive) into the VM using the virtio paravirtualized driver, which avoids the overhead of emulating legacy IDE/SATA controllers.
-vnc 0.0.0.0:0 — Starts a VNC server so you can connect to the VM's display remotely. Display :0 maps to TCP port 5900.
-cdrom — Mounts an ISO image as a virtual CD-ROM drive, used here to boot the Proxmox installer.
KVM is a Linux kernel module that turns the host kernel into a hypervisor. It allows QEMU to delegate CPU and memory operations to the physical hardware rather than emulating them in software. When you see -enable-kvm, that's what unlocks the performance — the guest OS runs on the real CPU cores with minimal overhead.
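Before relying on it, you can confirm that the rescue system actually exposes KVM; if either check comes up empty, QEMU will still run, just in slow software emulation:

# CPU virtualization extensions (Intel VT-x / AMD-V) visible?
grep -Ec 'vmx|svm' /proc/cpuinfo
# KVM device node present?
ls -l /dev/kvm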
VNC is a remote desktop protocol that gives you graphical access to a machine's display. In this setup, QEMU acts as a VNC server — it renders the Proxmox installer's GUI and streams it over the network to your VNC client. This is how you interact with a graphical installer on a headless server thousands of miles away.
ZFS is a combined filesystem and volume manager originally from Sun Microsystems. RAID10 (also written as RAID 1+0) stripes data across mirrored pairs:
Drive 0 ←mirror→ Drive 1   (pair A)
        ↕ striped ↕
Drive 2 ←mirror→ Drive 3   (pair B)
Redundancy: Each pair is a mirror. You can lose one drive per pair without data loss.
Performance: Reads and writes are distributed across both pairs (striping).
Capacity: You get ~50% of total raw capacity (2× 512 GB usable from 4× 512 GB).
Why ZFS: ZFS adds checksumming, snapshots, compression, and self-healing on top of the RAID. It's built into Proxmox and is the recommended filesystem for production use.
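Once Proxmox is up, you can verify the layout from its shell. On a default ZFS install the pool is named rpool; this is a read-only check, and the exact device labels will differ:

# Show the pool topology: expect two mirror vdevs (mirror-0, mirror-1),
# each containing two of the four NVMe drives, i.e. the striped pairs above.
zpool status rpool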
auto vmbr0
iface vmbr0 inet static
address <ip>/26
gateway <gateway>
bridge-ports enp41s0
bridge-stp off
bridge-fd 0
This is a Linux bridge that connects the physical NIC (enp41s0) to the Proxmox host. Think of it like a virtual network switch:
bridge-ports enp41s0 — The physical interface is "plugged into" this virtual switch. All traffic from the server flows through vmbr0, not directly through enp41s0.
bridge-stp off — Disables Spanning Tree Protocol. STP prevents loops in complex multi-switch networks, but with a single bridge it adds unnecessary delay.
bridge-fd 0 — Sets forwarding delay to zero, so the bridge starts forwarding packets immediately instead of waiting the default 15 seconds.
The host's public IP is assigned to this bridge, so Proxmox itself is reachable on it.
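Here is what the internal bridge's stanza looks like in the same /etc/network/interfaces file. The bridge name (vmbr1) and the host-side address (192.168.100.1/24) are the usual conventions rather than anything mandated; the two post-up lines are the ones explained below:

auto vmbr1
iface vmbr1 inet static
address 192.168.100.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr0 -j MASQUERADE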
This is a private, internal bridge with no physical interface attached (bridge-ports none). VMs connect to this bridge and get IPs in the 192.168.100.0/24 range.
post-up echo 1 > /proc/sys/net/ipv4/ip_forward — Enables IP forwarding on the host, turning it into a router. Without this, packets from VMs would be dropped instead of being forwarded to the internet.
post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr0 -j MASQUERADE — Sets up NAT masquerading. When a VM with IP 192.168.100.5 makes a request to the internet, this rule rewrites the source address to the host's public IP. The internet sees the traffic as coming from the host, and replies are routed back through the host to the correct VM.
This is the same technique your home router uses — it lets multiple devices (VMs) share a single public IP address.
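Two quick checks confirm both pieces are active after a reboot; these are standard tools, nothing Proxmox-specific:

# IP forwarding enabled on the host?
sysctl net.ipv4.ip_forward
# MASQUERADE rule installed?
iptables -t nat -L POSTROUTING -n -v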
Proxmox VE is now running on bare metal with ZFS RAID10 across all four NVMe drives. The web UI is accessible on port 8006, VMs have internet access through NAT on an internal bridge, and the whole setup survives reboots cleanly. Total time from rescue mode to a working hypervisor: about 30 minutes.