Proxmox Install
Prepare a USB drive with the latest ISO image from the Proxmox repository.
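A typical way to write the image on Linux (the target device below is an assumption; double-check it before writing, as this destroys existing data on that device):
dd if=proxmox-ve_*.iso of=/dev/sdX bs=1M conv=fdatasync status=progress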
Ports
Proxmox Ports
Port         Protocol  Type        Purpose
22           TCP       RESTRICTED  SSH (cluster, management only)
85           TCP       RESTRICTED  REST API pvedaemon on 127.0.0.1:85 (cluster only)
111          TCP/UDP   PUBLIC      rpcbind (NFS services)
3128         TCP       PUBLIC      Spice proxy (client remote viewer)
5404-5405    UDP       RESTRICTED  Corosync cluster traffic (cluster only)
5900-5999    TCP       PUBLIC      VNC web console websockets
8006         TCP       RESTRICTED  Web interface (management only)
60000-60050  TCP       RESTRICTED  Live migrations (cluster only)
Updated: 2021-01-20 (Reference)
Install Service
Select "Install Proxmox VE"
Accept EULA
Select installation drive with default options
Set country, timezone, and keyboard layout
Enter root password and a valid email address
Set network interface; use static settings and an FQDN hostname
Start installation
Important
The correct FQDN must be used from the start; changing the hostname afterwards is error prone and not recommended for clustering.
Access the default Proxmox web interface at https://{HOST}:8006.
Configuration
The default installation is open and insecure and must be hardened. Configuration management tooling is deliberately not used, to minimize attack surface and resource consumption. Reboot the server after these steps are completed to ensure all changes are applied.
Enable Automatic & Non-subscription Updates
Only changed or added lines are shown for files in this section.
apt install vim unattended-upgrades
Add the non-subscription repository.
deb http://download.proxmox.com/debian/pve {DEBIAN CODENAME} pve-no-subscription
Remove the subscription repository.
#deb https://enterprise.proxmox.com/debian/pve {DEBIAN CODENAME} pve-enterprise
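The two repository lines above normally live under /etc/apt/sources.list.d/ (a sketch of one way to apply them; file names are assumed, verify your layout before editing):
echo "deb http://download.proxmox.com/debian/pve {DEBIAN CODENAME} pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list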
Enable automatic updates.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
Enable unattended upgrades (only changed lines shown). If multiple Proxmox servers are clustered, stagger their automatic reboot times.
Unattended-Upgrade::Origins-Pattern {
"origin=Debian,codename=${distro_codename}-updates";
...
}
Unattended-Upgrade::Mail "root";
Unattended-Upgrade::MailOnlyOnError "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
Acquire::http::Dl-Limit "0";
Test unattended upgrades in debug mode, then bring the system fully up to date.
unattended-upgrade -d
apt update && apt upgrade && apt dist-upgrade
Add Local User, Sudo, & Secure SSH
Proxmox requires root SSH access for cluster communication. This uses public key authentication, so disable password authentication. Add a local user for primary login and sudo use.
apt install sudo
adduser {USER}
usermod -aG sudo {USER}
See SSH Configuration to generate a public key for the new user and add it to /home/{USER}/.ssh/authorized_keys.
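A minimal sketch for creating and installing a key from the management client ({USER} and {HOST} are placeholders):
# on the management client
ssh-keygen -t ed25519
ssh-copy-id {USER}@{HOST}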
Important
Start an SSH connection to prevent lockout while configuring.
Force sshd to use public key authentication only (only explicitly set lines are shown).
LoginGraceTime 2m
PermitRootLogin prohibit-password
StrictModes yes
MaxAuthTries 3
MaxSessions 10
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
service sshd restart
Note
Confirm that SSH public key login works with the new user before continuing.
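One way to verify from the management client, forcing key-only authentication for the test:
ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no {USER}@{HOST}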
Enable fail2ban
Enable automatic banning for SSH and Web GUI login failures.
apt install fail2ban
Add proxmox WebUI filter.
# Fail2Ban configuration file
#
# Author: Cyril Jaquier
#
# $Revision: 569 $
#
[Definition]
# Option: failregex
# Notes.: regex to match the password failure messages in the logfile. The
# host must be matched by a group named "host". The tag "<HOST>" can
# be used for standard IP/hostname matching and is only an alias for
# (?:::f{4,6}:)?(?P<host>\S+)
# Values: TEXT
#
failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.* msg=.*
# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =
Enable SSH & WebUI banning.
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
[proxmox]
enabled = true
port = https,http,8006
filter = proxmox
logpath = /var/log/daemon.log
Restart service and verify jails are started.
service fail2ban restart
cat /var/log/fail2ban.log
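The filter above is assumed to live at /etc/fail2ban/filter.d/proxmox.conf and the jail entries in /etc/fail2ban/jail.local (typical fail2ban layout; adjust if yours differs). Jail state can also be checked directly:
fail2ban-client status
fail2ban-client status sshd
fail2ban-client status proxmox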
Add Wireguard Kernel Support
This is only needed if LXC containers or Proxmox itself will use WireGuard. VMs can use WireGuard without it being enabled in the Proxmox kernel. As of Proxmox 7, WireGuard backports are no longer needed (the running kernel is 5.11).
apt update && apt install pve-headers
apt install wireguard wireguard-tools wireguard-dkms
modprobe wireguard
Enable wireguard on boot.
wireguard
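The single module name above is assumed to be appended to /etc/modules so it loads at boot; a quick sketch to persist it and confirm it is loaded now:
# persist the module for boot and verify it is currently loaded
echo wireguard >> /etc/modules
lsmod | grep wireguard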
Enable Hardware Virtualization (IOMMU)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Note
AMD: IOMMU & SVM enabled in BIOS. Use amd_iommu for grub.
Intel: IOMMU & VT-d enabled in BIOS. Use intel_iommu for grub.
Enable hardware virtualization kernel modules on boot.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
update-grub
reboot
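The GRUB_CMDLINE_LINUX_DEFAULT line above is edited in /etc/default/grub and the vfio module names are appended to /etc/modules before running update-grub (standard locations; verify on your system). After the reboot, IOMMU can be sanity-checked from the kernel log:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi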
Setup Networking
Both management and LXC/VM adaptors should go through bridges rather than the physical adaptor directly. This allows hardware changes and updates with minimal reconfiguration and risk of failure.
In Proxmox 7, you may need to install ifupdown2 if container networking is not working.
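If it is needed, the install is a single command:
apt install ifupdown2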
Create management interface
vmbr0 is used as the management interface. The typical default adaptor is eno1; the UI will show the available adaptors. The server address should be on the management VLAN.
Note
If there is only a single adaptor in the system, this is all that is needed; LXC/VMs will use vmbr0 as their bridge (not recommended).
datacenter › {SERVER} › system › network › create › bridge
Name:         vmbr0
IPv4:         IP / CIDR
Gateway:      GATEWAY
Autostart:    ☑
VLAN Aware:   ☑
Bridge ports: {ADAPTOR}
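For reference, the UI writes something roughly like this to /etc/network/interfaces (a sketch; address, gateway, and adaptor are placeholders):
auto vmbr0
iface vmbr0 inet static
        address {IP}/{CIDR}
        gateway {GATEWAY}
        bridge-ports {ADAPTOR}
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094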
Create bonded interface
bond0 is the bonded device the bridge will use. No IP should be set. Adaptors are shown in the management interface and should be separated by a space.
Note
This assumes 802.3ad has been enabled on the switch.
Create 802.3ad link aggregation.
UniFi requires 802.3ad ports to be next to each other; ports 3-4 are used as an example. Apply a Profile Override to enable.
unifi › devices › device › port › edit › profile overrides › operation › aggregate
Aggregate ports: 3-4
datacenter › {SERVER} › system › network › create › bond
Name:        bond0
Autostart:   ☑
Slaves:      {ADAPTOR 1} {ADAPTOR 2}
Mode:        LACP (802.3ad)
Hash policy: layer2+3
Create bonded, bridged interface for LXC/VM’s
vmbr1 is the bridge device used by LXC/VMs. No IP should be set.
datacenter › {SERVER} › system › network › create › bridge
Name:         vmbr1
Autostart:    ☑
VLAN Aware:   ☑
Bridge ports: bond0
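The bond plus bridge pair ends up roughly like this in /etc/network/interfaces (a sketch; adaptor names are placeholders):
auto bond0
iface bond0 inet manual
        bond-slaves {ADAPTOR 1} {ADAPTOR 2}
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094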
Setup Proxmox DNS Servers
datacenter › {SERVER} › system › dns
DNS Server 1: INTERNAL DNS
DNS Server 2: 1.1.1.1
DNS Server 3: 1.0.0.1
Add to Datacenter Cluster
Servers can be added to a cluster to share configuration and allow migration of LXC/VMs. Any number of servers can be added; HA is only available once 3 servers are in the cluster.
Important
A server must be added to an existing cluster before adding LXC/VMs, otherwise they will be deleted when VM info is synced from the first cluster server. This prevents duplicate LXC/VM IDs, which would cause migration and management issues.
If an existing Proxmox server already has LXC/VMs, the cluster should be created on that machine and subsequent servers added afterwards.
Be sure that server IPs and hostnames are in their final, correct state before clustering.
Note
This can be done even after restricting SSH. Copy the join information and connect with the root password of the first Proxmox install. The join may appear to fail, but this is due to services being reloaded; just reload the web UI (on either server) and the nodes should appear connected.
Create a new Cluster
datacenter › cluster › join information › copy information
datacenter › cluster › create cluster
Cluster Name:         {NAME}
Cluster Network Link: 0
Cluster Network IP:   IP / CIDR
Add second server to cluster
datacenter › cluster › join cluster
Information: {PASTE JOIN INFORMATION}
Password:    PASS
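The same steps can be done from the CLI with pvecm (a sketch; the address is a placeholder):
# on the first server
pvecm create {NAME}
# on each additional server, pointing at the first server
pvecm add {FIRST SERVER IP}
# confirm membership and quorum
pvecm status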
Firewall
Restrict hypervisor access to the cluster and specific management clients. See Add to Datacenter Cluster to set up clustering before this step if using multiple servers.
Datacenter Firewall
Datacenter firewall defines rules that can be applied to all systems in the cluster.
Important
Open an SSH connection to the server before enabling the firewall in case of lockout. Disable the active firewall with pve-firewall stop if access breaks, and remember to re-enable it afterwards.
LXC/VM bridged traffic is unaffected unless per-LXC/VM firewalls are enabled.
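Useful checks while working through this section (standard pve-firewall subcommands):
pve-firewall status      # is the firewall enabled and running
pve-firewall compile     # print the generated ruleset without applying it
pve-firewall stop        # emergency disable; re-enable with pve-firewall start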
Create cluster IP set for firewall
datacenter › firewall › ipset › create
IPSet:   Cluster
Comment: pve servers
Add cluster IPs to cluster IP set
datacenter › firewall › ipset › Cluster › add
IP/CIDR: {PVE SERVER 1}
IP/CIDR: {PVE SERVER 2}
Create management IP set for firewall
datacenter › firewall › ipset › create
IPSet:   Management
Comment: pve remote access
Add remote client IPs to management IP set
datacenter › firewall › ipset › Management › add
IP/CIDR: {REMOTE CLIENT IP 1}
IP/CIDR: {REMOTE CLIENT IP 2}
Create a proxmox Security Group for services
datacenter › firewall › security group › create
Name:    pve
Comment: pve hypervisor firewall
Live Migration Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +cluster
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  60000:60050
Comment:     Live Migrations
Log level:   nolog
Corosync cluster traffic Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +cluster
Destination:
Enable:      ☑
Macro:
Protocol:    UDP
Source port:
Dest. port:  5404:5405
Comment:     Corosync cluster traffic
Log level:   nolog
Web Interface Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +management
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  8006
Comment:     Web Interface
Log level:   nolog
VNC Web Console Websockets Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  5900:5999
Comment:     VNC Web console websockets
Log level:   nolog
Pvedaemon Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +cluster
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  85
Comment:     pvedaemon (listens 127.0.0.1:85) REST API
Log level:   nolog
SSH (Cluster traffic) Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +cluster
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  22
Comment:     SSH (cluster traffic)
Log level:   nolog
SSH (Management traffic) Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:      +management
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  22
Comment:     SSH (management traffic)
Log level:   nolog
Rpcbind (NFS services TCP) Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  111
Comment:     rpcbind (NFS services)
Log level:   nolog
Rpcbind (NFS services UDP) Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:
Destination:
Enable:      ☑
Macro:
Protocol:    UDP
Source port:
Dest. port:  111
Comment:     rpcbind (NFS services)
Log level:   nolog
Spice proxy Rule
datacenter › firewall › security group › pve › add
Direction:   IN
Action:      ACCEPT
Source:
Destination:
Enable:      ☑
Macro:
Protocol:    TCP
Source port:
Dest. port:  3128
Comment:     spice proxy (client remote viewer)
Log level:   nolog
Enable the security group & add drop rule.
Enable the security group
datacenter › firewall › insert: security group › pve
Security Group: pve
Interface:
Enable:         ☑
Add DROP Rule (disabled)
datacenter › firewall › add
Direction:   IN
Action:      DROP
Interface:
Source:
Destination:
Enable:      ☐
Macro:
Protocol:
Source port:
Dest. port:
Comment:     Drop all other traffic
Log level:   nolog
Note
Add unchecked (not enabled) and move to bottom of rule list.
Enable firewall & drop policy.
Enable firewall
datacenter › firewall › options
Input Policy: ACCEPT
Firewall:     YES
Warning
Set input policy before enabling firewall, or you will drop all traffic.
Enable DROP policy Rule
datacenter › firewall › Drop all other traffic
Enable: ☑
Cluster Firewall
Set up the Datacenter Firewall first to load the global pve security group. Configure the following for each server in the cluster.
Enable the security group on each cluster server
datacenter › {SERVER} › firewall › insert: security group › pve
Security Group: pve
Interface:
Enable:         ☑
Add DROP Rule (disabled)
datacenter › {SERVER} › firewall › add
Direction:   IN
Action:      DROP
Interface:
Source:
Destination:
Enable:      ☐
Macro:
Protocol:
Source port:
Dest. port:
Comment:     Drop all other traffic
Log level:   nolog
Note
Add unchecked (not enabled) and move to bottom of rule list.
Enable firewall & drop policy.
Enable firewall
datacenter › {SERVER} › firewall › options
Firewall: YES
Enable DROP policy Rule
datacenter › {SERVER} › firewall › Drop all other traffic
Enable: ☑
Remove Subscription Notice
Without changes, the subscription notice appears on every login.
Disable via Service
Preferred method – will survive updates and reboots without modifying any PVE files. Download the latest release.
dpkg -i pve-fake-subscription_*.deb
echo '127.0.0.1 shop.maurer-it.com' | sudo tee -a /etc/hosts
Disable via Javascript
Not preferred – will not survive updates or upgrades and modifies PVE files.
sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
Note
This will disconnect you if executed through the Proxmox web UI. Clear the browser cache (shift + reload) and reconnect to download the new javascript.
Mount External ZFS Pool
ZFS utils are already installed. ZFS can be directly imported on the cluster and will automatically appear in the WebUI afterwards.
zpool import {POOLNAME}
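The import can be confirmed from the shell before looking for it in the WebUI:
zpool status {POOLNAME}
zfs list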
Add ISOs
ISOs may be uploaded via the GUI (datacenter › {SERVER} › local › iso images › upload) or copied directly to /var/lib/vz/template/iso/ if large.
Add Container Templates
Templates are updated via the GUI (datacenter › {SERVER} › local › ct templates) or the command line.
pveam update
pveam available
pveam download {STORAGE} {NAME}
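For example (the storage and template names below are illustrative; use the available list to pick a real one):
pveam available --section system
pveam download local debian-11-standard_11.0-1_amd64.tar.gz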