Proxmox Install

Prepare a USB drive with the latest ISO image from the Proxmox repository.


Install Server

  1. Select Install Proxmox VE

  2. Accept EULA

  3. Select installation drive with default options

  4. Set country, timezone, and keyboard layout

  5. Enter root password & valid email address

  6. Set network interface. Use static settings and FQDN hostname

  7. Start installation


The correct FQDN must be set during installation, as changing the hostname afterwards is error-prone and not recommended for clustering.
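The hostname/IP mapping can be sanity-checked before clustering. This is a minimal sketch using a fabricated /etc/hosts line and a placeholder FQDN; on the server, grep the real /etc/hosts and confirm `hostname --fqdn` prints the same name.

```shell
# Placeholder values for illustration only.
FQDN="pve1.example.com"
HOSTS_LINE="192.0.2.10 pve1.example.com pve1"   # on the server: grep "$FQDN" /etc/hosts
echo "$HOSTS_LINE" | grep -qE "[[:space:]]${FQDN}([[:space:]]|$)" \
  && echo "hosts mapping OK"
```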

Access the Proxmox web interface at https://{HOST}:8006.



The default installation is open and insecure, and must be configured. Configuration management is not used, to minimize attack surface and resource consumption. Reboot the server after these steps are completed to ensure changes are applied.


Enable Automatic & Non-subscription Updates

Only changed or added lines are shown for files in this section.

apt install vim unattended-upgrades

Add the non-subscription repository.

0644 root root /etc/apt/sources.list
deb http://download.proxmox.com/debian/pve {DEBIAN CODENAME} pve-no-subscription

Remove the subscription repository.

0644 root root /etc/apt/sources.list.d/pve-enterprise.list
#deb https://enterprise.proxmox.com/debian/pve {DEBIAN CODENAME} pve-enterprise

Enable automatic updates.

0644 root root /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

Enable unattended upgrades (only changed lines shown). Proxmox servers should be rebooted at different times.

0644 root root /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
        "origin=Proxmox";
};

Unattended-Upgrade::Mail "root";
Unattended-Upgrade::MailOnlyOnError "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
Acquire::http::Dl-Limit "0";
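Staggering reboots across cluster nodes just means giving each node a different Automatic-Reboot-Time; the times below are arbitrary examples.

```
# node 1: /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
# node 2:
Unattended-Upgrade::Automatic-Reboot-Time "05:30";
```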
Validate the unattended-upgrades configuration.
unattended-upgrade -d

Upgrade the server to the latest patches. Always use dist-upgrade on Proxmox; a plain upgrade can leave packages in a broken state.
apt update && apt dist-upgrade

Add Local User, Sudo, & Secure SSH

Proxmox requires root SSH access for cluster communication. It uses public key authentication, so disable password authentication. Add a local user for primary login and sudo use.

Add a local user.
apt install sudo
adduser {USER}
usermod -aG sudo {USER}

See SSH Configuration to generate a public key for the new user and add to /home/{USER}/.ssh/authorized_keys.
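A minimal sketch of the key generation step, assuming an ed25519 key; the key path and comment are examples, not the referenced SSH Configuration doc.

```shell
# Generate an ed25519 keypair (run on the client machine).
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "pve-admin" -f "$KEYDIR/id_ed25519"
# The public key is what goes into /home/{USER}/.ssh/authorized_keys:
cat "$KEYDIR/id_ed25519.pub"
```

While password authentication is still enabled, `ssh-copy-id -i "$KEYDIR/id_ed25519.pub" {USER}@{HOST}` installs the key in one step.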


Start an SSH connection to prevent lockout while configuring.

Force sshd to use public key only (only explicitly enabled lines are shown).

0644 root root /etc/ssh/sshd_config
LoginGraceTime 2m
PermitRootLogin prohibit-password
StrictModes yes
MaxAuthTries 3
MaxSessions 10

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem       sftp    /usr/lib/openssh/sftp-server
service sshd restart


Confirm that SSH public key login works with the new user before continuing.
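The critical options can be audited quickly. This sketch greps a sample file for determinism; on the server, point CFG at /etc/ssh/sshd_config instead (and `sshd -t`, run as root, validates the full config before restarting).

```shell
# Sample config standing in for /etc/ssh/sshd_config.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
PasswordAuthentication no
PubkeyAuthentication yes
EOF
grep -qiE '^PasswordAuthentication[[:space:]]+no' "$CFG" && echo "password auth disabled"
grep -qiE '^PubkeyAuthentication[[:space:]]+yes' "$CFG" && echo "pubkey auth enabled"
rm -f "$CFG"
```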


Enable fail2ban

Enable automatic banning for SSH and Web GUI login failures.

apt install fail2ban

Add proxmox WebUI filter.

0644 root root /etc/fail2ban/filter.d/proxmox.conf
# Fail2Ban configuration file
# Author: Cyril Jaquier
# $Revision: 569 $


# Option:  failregex
# Notes.:  regex to match the password failure messages in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>\S+)
# Values:  TEXT

failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.* msg=.*

# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
ignoreregex =
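fail2ban ships `fail2ban-regex` for testing filters properly; as a dependency-free smoke test, the pattern can be checked with grep against a fabricated pvedaemon log line (fail2ban's `<HOST>` tag is replaced with `.*` here, since grep does not know it).

```shell
# Fabricated log line in the shape the filter expects.
LINE='pvedaemon[1234]: authentication failure; rhost=203.0.113.5 user=root@pam msg=no such user'
echo "$LINE" | grep -qE 'pvedaemon\[.*authentication failure; rhost=.* user=.* msg=.*' \
  && echo "filter matches"
```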

Enable SSH & WebUI banning.

0644 root root /etc/fail2ban/jail.d/proxmox.conf
[sshd]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log

[proxmox]
enabled = true
port    = https,http,8006
filter  = proxmox
logpath = /var/log/daemon.log

Restart service and verify jails are started.

service fail2ban restart
cat /var/log/fail2ban.log


Add Wireguard Kernel Support

This is only needed if LXC containers or Proxmox itself will use wireguard. VMs can use wireguard without it being enabled in the Proxmox kernel.

As of Proxmox 7 (kernel 5.11), wireguard backports are no longer needed.

Update and install wireguard.
apt update && apt install pve-headers
apt install wireguard wireguard-tools
modprobe wireguard

Enable wireguard on boot.

0644 root root /etc/modules-load.d/modules.conf
wireguard

Enable Hardware Virtualization (IOMMU)

0644 root root /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"


  • AMD: IOMMU & SVM enabled in BIOS. Use amd_iommu for grub.

  • Intel: IOMMU & VT-d enabled in BIOS. Use intel_iommu for grub.

Enable hardware virtualization kernel modules on boot.

0644 root root /etc/modules-load.d/modules.conf
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Update boot image with IOMMU changes.
update-grub

Setup Networking

Both management and LXC/VM adapters should be used through bridges and not the physical adapter directly. This allows hardware changes and upgrades with minimal reconfiguration and risk of failure.
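A bridged setup in /etc/network/interfaces typically looks like the sketch below; the interface name and addresses are examples, so verify against the config the installer generated.

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```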

In Proxmox 7, you may need to install ifupdown2 (apt install ifupdown2) if container networking is not working.


Add to Datacenter Cluster

Servers can be added to a cluster to share configuration and allow migration of LXC/VMs. Any number of servers can be added; HA is only available once 3 servers are in the cluster.


Servers must be added to an existing cluster before adding LXC/VMs; otherwise those will be deleted when VM info is synced from the first cluster server. This is done to prevent duplicate LXC/VM IDs, which would cause migration and management issues.

If an existing Proxmox server has LXC/VMs, the cluster should be created on that machine, and subsequent servers added afterwards.

Be sure that server IP and hostnames are in the correct state.


This can be done even after restricting SSH. Copy the join info and connect with the root password of the first Proxmox install. The join may appear to fail, but this is due to the services being reloaded. Reload the site (on either server) and the nodes should appear connected.



Restrict hypervisor access to the cluster and specific management clients. See Add to Datacenter Cluster to set up clustering before this step if using multiple servers.




Datacenter Firewall

Datacenter firewall defines rules that can be applied to all systems in the cluster.
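Datacenter-level rules land in /etc/pve/firewall/cluster.fw. A minimal sketch of the file format follows; the management network, group name, and ports are hypothetical examples, not the rules this guide configures in the GUI.

```
[OPTIONS]
enable: 1
policy_in: DROP

[group management]
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 22
```

As with the GUI flow, add accept rules before switching policy_in to DROP.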


Open an SSH connection to the server before enabling the firewall in case of lockout. Disable the active firewall with pve-firewall stop if access breaks. Remember to re-enable it.

LXC/VM bridged traffic is unaffected unless per LXC/VM firewalls are enabled.

Enable the security group & add drop rule.


Add unchecked (not enabled) and move to bottom of rule list.

Enable firewall & drop policy.


Set the input policy before enabling the firewall, or you will drop all traffic.

Cluster Firewall

Set Datacenter Firewall first to load global pve security group. Configure for each specific server in the cluster.


Add unchecked (not enabled) and move to bottom of rule list.

Enable firewall & drop policy.

Remove Subscription Notice

This will prompt on every login.

Disable via Service

Preferred method – survives updates and reboots without modifying any PVE files. Download the latest pve-fake-subscription release.

Install service and disable subscription key checking.
dpkg -i pve-fake-subscription_*.deb
echo '127.0.0.1 shop.maurer-it.com' | sudo tee -a /etc/hosts


Disable via Javascript

Not preferred – will not survive updates or upgrades, and modifies PVE files.

Disable subscription notice.
sed -Ezi.bak "s/(\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service


This will disconnect you if executed through the Proxmox web UI. Clear the browser cache (shift + reload) and reconnect to download the new javascript.


Mount External ZFS Pool

ZFS utils are already installed. ZFS can be directly imported on the cluster and will automatically appear in the WebUI afterwards.

zpool import {POOLNAME}

Add ISOs

ISOs may be uploaded via the GUI (datacenter › {SERVER} › local › ISO Images › Upload) or copied directly to /var/lib/vz/template/iso/ if large.

Add Container Templates

Templates are updated via the GUI (datacenter › {SERVER} › local › CT Templates) or the command line.

pveam update
pveam available
pveam download {STORAGE} {NAME}