
Setting Up Proxmox on a Remote Dedicated Server

How I remotely installed Proxmox on a dedicated server without any physical access — just SSH, a virtual machine trick, and a VNC connection.

#proxmox #homelab #qemu #networking #hetzner #devops

Why Proxmox on a Dedicated Server?

Most hosting providers don't offer Proxmox as a one-click install. If you want a full hypervisor with ZFS, clustering, and a web UI on bare metal, you have to install it yourself. The trick is doing it on a remote server you can only reach over SSH — no keyboard, no monitor, no USB stick.

This post walks through the entire process on a Hetzner dedicated server with 64 GB RAM and 4×512 GB NVMe drives.


Step 1 — Boot into Rescue Mode

Hetzner (like most dedicated providers) offers a rescue system — a minimal Linux environment that boots over the network and gives you SSH access to the raw hardware. Nothing is installed yet; the drives are blank or wiped.

From the Hetzner Robot panel:

  1. Activate Rescue Mode (choose Linux 64-bit)
  2. Reboot the server
  3. SSH in using the temporary root credentials Hetzner provides

You're now sitting in a RAM-based Linux environment with direct access to all four NVMe drives.
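
Before going any further, it's worth confirming that all four drives are visible from the rescue system:

lsblk -o NAME,SIZE,MODEL

You should see nvme0n1 through nvme3n1 listed.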


Step 2 — Install QEMU and Download Proxmox

Since we can't physically plug in a USB drive with the Proxmox ISO, we use QEMU to emulate a machine inside the rescue system and boot the ISO as a virtual CD-ROM — while writing directly to the real NVMe drives.

apt update
apt install -y qemu-system-x86 wget

Download the Proxmox VE ISO:

wget https://enterprise.proxmox.com/iso/proxmox-ve_9.1-1.iso
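
Optionally, verify the download against the SHA256 checksum published on the Proxmox download page:

sha256sum proxmox-ve_9.1-1.iso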

Step 3 — Check the Boot Mode

Before launching QEMU, check whether the server uses UEFI or legacy BIOS. This determines the flags you pass to QEMU:

[ -d /sys/firmware/efi ] && echo "UEFI" || echo "BIOS"

Most Hetzner dedicated servers boot in BIOS mode. If yours says UEFI, you'll need to add OVMF firmware flags — but the steps below assume BIOS.
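
For reference, a UEFI install means installing OVMF in the rescue system and pointing QEMU at its firmware image — a sketch, assuming the Debian ovmf package and its default path (which may differ on other rescue images):

apt install -y ovmf
# then add this flag to each qemu-system-x86_64 command below:
#   -bios /usr/share/ovmf/OVMF.fd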


Step 4 — Launch the Proxmox Installer via QEMU

This is the core trick. We start a QEMU virtual machine that:

  • Boots from the Proxmox ISO (as a virtual CD-ROM)
  • Has all four physical NVMe drives passed through as virtio disks
  • Exposes a VNC server so we can interact with the GUI installer remotely

qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -cdrom proxmox-ve_9.1-1.iso \
  -boot d \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/nvme1n1,format=raw,if=virtio \
  -drive file=/dev/nvme2n1,format=raw,if=virtio \
  -drive file=/dev/nvme3n1,format=raw,if=virtio \
  -vnc 0.0.0.0:0

Flag breakdown:

  • -enable-kvm — Use hardware virtualization (KVM) for near-native speed
  • -m 8192 — Allocate 8 GB RAM to the VM
  • -cdrom ... — Mount the Proxmox ISO as a virtual CD drive
  • -boot d — Boot from the CD-ROM drive first
  • -drive file=/dev/nvmeXn1,... — Pass each physical NVMe drive through to the VM
  • format=raw,if=virtio — Raw disk access using the fast virtio driver
  • -vnc 0.0.0.0:0 — Start a VNC server on port 5900, accessible from any IP

⚠️ Make sure port 5900 is open in your firewall before launching this command.
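
If you'd rather not expose VNC to the internet at all, one alternative is to bind it to localhost and tunnel it over SSH (a variation on the command above, not what I did here):

# on the server: change the last flag to  -vnc 127.0.0.1:0
# on your workstation: forward local port 5900 to the server
ssh -L 5900:127.0.0.1:5900 root@<your-server-ip>

Then point your VNC client at localhost:5900.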


Step 5 — Connect via VNC and Run the Installer

Open a VNC client (I used RealVNC Viewer) and connect to:

<your-server-ip>:5900

You'll see the standard Proxmox VE graphical installer. Walk through it:

  1. Accept the EULA
  2. Select the target disks — this is the most important step
  3. Choose the filesystem: Select ZFS RAID10 if you have 4 drives. This gives you both redundancy (mirroring) and performance (striping). You can survive one drive failure per mirror pair.
  4. Set the root password and admin email
  5. Configure networking:
    • IP CIDR: <your-server-ip>/26
    • Gateway: <gateway-ip> (usually ends in .1)
    • DNS: 1.1.1.1 (Cloudflare) or 8.8.8.8 (Google)
  6. Check "Automatically reboot after setup" and confirm

The installer writes directly to your physical NVMe drives. When it finishes, the VM will reboot — but since we're still inside QEMU, we need to handle the next boot ourselves.


Step 6 — Boot from the Installed Drives

After the installer reboots, go back to your SSH session and press Ctrl+C to kill the QEMU process.

Now we need to find the real network interface name of the server. Run:

ip addr

Look for the altname field — it will be something like enp41s0. Note this down; you'll need it in the next step.
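
If the output is long, a quick filter shows only the persistent names:

ip addr | grep altname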

Now boot QEMU again, but this time without the ISO — so it boots from the drives where Proxmox was just installed:

qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -k en-us \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/nvme1n1,format=raw,if=virtio \
  -drive file=/dev/nvme2n1,format=raw,if=virtio \
  -drive file=/dev/nvme3n1,format=raw,if=virtio \
  -vnc 0.0.0.0:0

Notice: no -cdrom and no -boot d. QEMU will boot from the first drive — which now has Proxmox installed on it.


Step 7 — Fix the Network Interface Name

Reconnect via VNC. You'll see the Proxmox login prompt. Log in with root and the password you set during installation.

The installer configured networking using the interface name it detected inside the QEMU VM (net0 in my case), but the real hardware interface has a different name. We need to fix this:

nano /etc/network/interfaces

Find the line referencing net0 and replace it with your actual interface name (e.g., enp41s0).
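
After the edit, the relevant block should look roughly like this (illustrative — your interface name and addresses will differ):

iface enp41s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address <your-server-ip>/26
    gateway <gateway-ip>
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0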

Save the file, then shut down the VM:

shutdown now

Close the VNC viewer.


Step 8 — Reboot into Real Proxmox

Go back to your SSH session and reboot the server:

reboot

This time the server boots natively from the NVMe drives — no QEMU involved. Proxmox is now running on bare metal.

Wait about 2 minutes, then verify:

ping <your-server-ip>

If it responds, you're live.
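
You can also confirm the web UI is already listening before opening a browser (-k skips the self-signed certificate warning):

curl -k https://<your-server-ip>:8006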


Step 9 — Initial Proxmox Configuration

Access the Proxmox web UI at:

https://<your-server-ip>:8006

Log in with root and your password. Then open a shell and run the initial updates:

apt-get update && apt-get upgrade -y && apt-get dist-upgrade

If you hit repo errors related to Ceph, remove the stale source list:

rm -f /etc/apt/sources.list.d/ceph.list

Step 10 — Fix VM Internet Access (Network Interfaces)

If ping google.com doesn't work from inside a VM, you likely need to:

  1. Add a firewall rule for inbound UDP on ports 32768–65535
  2. Update /etc/network/interfaces with a proper bridge configuration

Here's the full interface config that works for Hetzner:

auto lo
iface lo inet loopback
 
iface enp41s0 inet manual
 
auto vmbr0
iface vmbr0 inet static
    address <your-ip>/26
    gateway <gateway-ip>
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0
 
# Internal NAT network for VMs
auto vmbr1
iface vmbr1 inet static
    address 192.168.100.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr0 -j MASQUERADE
 
# Optional: IPv6 support (Hetzner provides this)
iface vmbr0 inet6 static
    address <your-ipv6>/64
    gateway fe80::1
 
source /etc/network/interfaces.d/*

Apply the changes:

systemctl restart networking

Then inside each VM, set a static IP in the 192.168.100.x range with gateway 192.168.100.1.
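
For a Debian-style guest, that looks something like this (a sketch — the interface name ens18 and the .2 address are assumptions; check ip addr inside the VM):

# /etc/network/interfaces inside the VM
auto ens18
iface ens18 inet static
    address 192.168.100.2/24
    gateway 192.168.100.1

# point /etc/resolv.conf at a public resolver, e.g. nameserver 1.1.1.1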


Tools & Concepts Explained

QEMU (Quick Emulator)

QEMU is an open-source machine emulator and virtualizer. It can emulate an entire computer — CPU, memory, disks, network — in software, and when combined with KVM (Kernel-based Virtual Machine), it runs guest operating systems at near-native speed by leveraging hardware virtualization extensions (Intel VT-x / AMD-V).

In this guide, QEMU serves an unconventional purpose: instead of creating virtual disks, we pass the server's real physical drives directly into the emulated machine. This lets us boot the Proxmox graphical installer inside the rescue environment and have it write directly to the NVMe drives — as if we'd plugged in a USB stick and booted from it.

Key QEMU features used here:

  • -enable-kvm — Activates KVM acceleration. Without this, QEMU falls back to pure software emulation, which is dramatically slower.
  • -drive file=/dev/nvmeXn1,format=raw,if=virtio — Passes a raw block device (physical drive) into the VM using the virtio paravirtualized driver, which avoids the overhead of emulating legacy IDE/SATA controllers.
  • -vnc 0.0.0.0:0 — Starts a VNC server so you can connect to the VM's display remotely. Display :0 maps to TCP port 5900.
  • -cdrom — Mounts an ISO image as a virtual CD-ROM drive, used here to boot the Proxmox installer.

KVM (Kernel-based Virtual Machine)

KVM is a Linux kernel module that turns the host kernel into a hypervisor. It allows QEMU to delegate CPU and memory operations to the physical hardware rather than emulating them in software. When you see -enable-kvm, that's what unlocks the performance — the guest OS runs on the real CPU cores with minimal overhead.
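
You can confirm the rescue system actually exposes KVM before relying on it:

lscpu | grep -i virtualization   # should show VT-x or AMD-V
ls -l /dev/kvm                   # the device node QEMU needs for -enable-kvm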

VNC (Virtual Network Computing)

VNC is a remote desktop protocol that gives you graphical access to a machine's display. In this setup, QEMU acts as a VNC server — it renders the Proxmox installer's GUI and streams it over the network to your VNC client. This is how you interact with a graphical installer on a headless server thousands of miles away.

ZFS RAID10

ZFS is a combined filesystem and volume manager originally from Sun Microsystems. RAID10 (also written as RAID 1+0) stripes data across mirrored pairs:

Drive 0 ←mirror→ Drive 1    (pair A)
Drive 2 ←mirror→ Drive 3    (pair B)
         ↕ striped ↕

  • Redundancy: Each pair is a mirror. You can lose one drive per pair without data loss.
  • Performance: Reads and writes are distributed across both pairs (striping).
  • Capacity: You get ~50% of total raw capacity (2× 512 GB usable from 4× 512 GB).
  • Why ZFS: ZFS adds checksumming, snapshots, compression, and self-healing on top of the RAID. It's built into Proxmox and is the recommended filesystem for production use.
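
Once the system is up, you can confirm the layout from the Proxmox host shell — the installer names the pool rpool by default:

zpool status rpool   # expect two mirror vdevs, one per drive pair
zpool list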

Network Interface Configuration Explained

The /etc/network/interfaces file defines how the server's networking is set up. Here's what each block does:

vmbr0 — The Public Bridge

auto vmbr0
iface vmbr0 inet static
    address <ip>/26
    gateway <gateway>
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0

This is a Linux bridge that connects the physical NIC (enp41s0) to the Proxmox host. Think of it like a virtual network switch:

  • bridge-ports enp41s0 — The physical interface is "plugged into" this virtual switch. All traffic from the server flows through vmbr0, not directly through enp41s0.
  • bridge-stp off — Disables Spanning Tree Protocol. STP prevents loops in complex multi-switch networks, but with a single bridge it adds unnecessary delay.
  • bridge-fd 0 — Sets forwarding delay to zero, so the bridge starts forwarding packets immediately instead of waiting the default 15 seconds.

The host's public IP is assigned to this bridge, so Proxmox itself is reachable on it.
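
You can see the bridge and its attached port with the standard iproute2 tooling:

ip -d link show vmbr0
bridge link show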

vmbr1 — The Internal NAT Network

auto vmbr1
iface vmbr1 inet static
    address 192.168.100.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr0 -j MASQUERADE

This is a private, internal bridge with no physical interface attached (bridge-ports none). VMs connect to this bridge and get IPs in the 192.168.100.0/24 range.

  • post-up echo 1 > /proc/sys/net/ipv4/ip_forward — Enables IP forwarding on the host, turning it into a router. Without this, packets from VMs would be dropped instead of being forwarded to the internet.
  • post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr0 -j MASQUERADE — Sets up NAT masquerading. When a VM with IP 192.168.100.5 makes a request to the internet, this rule rewrites the source address to the host's public IP. The internet sees the traffic as coming from the host, and replies are routed back through the host to the correct VM.

This is the same technique your home router uses — it lets multiple devices (VMs) share a single public IP address.
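
To sanity-check the NAT path on the host, inspect the forwarding flag and the rule itself:

sysctl net.ipv4.ip_forward                # should report 1
iptables -t nat -L POSTROUTING -n -v      # should list the MASQUERADE rule for 192.168.100.0/24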

Why Two Bridges?

  • vmbr0 gives the host (Proxmox itself) a public IP and internet access.
  • vmbr1 gives VMs internet access through NAT without needing additional public IPs, which most providers charge extra for.

VMs are assigned static IPs like 192.168.100.2, 192.168.100.3, etc., with 192.168.100.1 (the host) as their gateway.
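
On the Proxmox side, attaching a guest to the internal bridge is a single command (VM ID 100 is just an example):

qm set 100 --net0 virtio,bridge=vmbr1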


Result

Proxmox VE is now running on bare metal with ZFS RAID10 across all four NVMe drives. The web UI is accessible on port 8006, VMs have internet access through NAT on an internal bridge, and the whole setup survives reboots cleanly. Total time from rescue mode to a working hypervisor: about 30 minutes.