Proxmox Backup Server
Backups are primarily done through Proxmox Backup Server, taking snapshots of running LXCs and VMs. These are stored on a mirrored ZFS array and synchronized to both an off-site location and a cloud storage provider.
Backup Server configuration
The backup server is set up with:
- zfs storage
- access control - api tokens
- datastore
- sync jobs
- prune jobs
- verify jobs
- permissions
- timings and simulator
ZFS storage array
There are currently 2 x 8TB WD drives. Current pool status:
  pool: pergamum
 state: ONLINE
  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        pergamum                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
            sdc1                                        ONLINE       0     0     0

errors: No known data errors
Creating and expanding the ZFS pool
zfs set mountpoint=/mnt/pergamum pergamum
(zpool import -c /etc/zfs/zpool.cache -aN)
zpool export pergamum
This has not been tried yet, but another set of disks can be added as an additional top-level virtual device to the existing RAID-Z pool:
> NOTE! `-n` is a dry run; remove it to commit.
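A sketch of such a command, with hypothetical device names that are not taken from the actual setup:

zpool add -n pergamum raidz1 /dev/sdX /dev/sdY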
Access Control
Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.
API Token:
- user: root@pam
- token name: CLIENT_NAME
- expire: never
- enabled: true
Permissions - Add an API Token Permission:
- path: /datastore/proxmox-backup/CLIENT_NAME
- api token: root@pam!CLIENT_NAME
- role: DatastoreBackup
- propagate: true
Note! The path will not be defined until after the datastore namespace is defined in the steps below.
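The token and its permission can also be created from the PBS shell. A minimal sketch assuming the GUI values above; the namespace referenced by the path must already exist (see below):

proxmox-backup-manager user generate-token root@pam CLIENT_NAME
proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'

The first command prints the token secret; store it, as it cannot be retrieved later.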
Proxmox datastore
If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All references to `proxmox-backup` below refer to whatever you named it in this create step.
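If the datastore has to be created from the shell rather than the GUI, a sketch assuming it lives directly on the ZFS mountpoint:

proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum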
Namespace
Namespaces are what we use within a datastore to separate permissions per host. It is important to create one for each API token created in the Access Control section above.
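Namespaces are created in the GUI from the datastore's Content tab. Recent client versions also expose a namespace subcommand; a rough sketch where BACKUP_SERVER is a placeholder:

proxmox-backup-client namespace create CLIENT_NAME --repository root@pam@BACKUP_SERVER:proxmox-backup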
Prune & Garbage collect
We don't require backups for every day of the year. Pruning lets you systematically delete older backups, retaining backups for the last given number of time intervals. There exists a fantastic simulator that can be used to experiment with different backup schedules and prune options: https://pbs.proxmox.com/docs/prune-simulator/. The current configuration is:
- datastore: proxmox-backup
- namespace: root
- keep last: 4
- keep hourly: -
- keep daily: 6
- keep weekly: 3
- keep monthly: 6
- keep yearly: 4
- max_depth: full
- prune schedule: 0/6:00
- enabled: true
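The effect of these keep options can also be dry-run against a single backup group from a client before relying on the schedule. A sketch where the repository and the group vm/100 are placeholders:

proxmox-backup-client prune vm/100 --repository root@pam@BACKUP_SERVER:proxmox-backup \
  --keep-last 4 --keep-daily 6 --keep-weekly 3 --keep-monthly 6 --keep-yearly 4 --dry-run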
Verify jobs
Current configuration is:
- local datastore: proxmox-backup
- namespace: root
- max-depth: full
- schedule: daily
- skip verified: true
- re-verify after: 30 days
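A one-off verification of the entire datastore can also be started from the PBS shell, independent of the scheduled job:

proxmox-backup-manager verify proxmox-backup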
Permissions
Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:
- path: /datastore/proxmox-backup/CLIENT_NAME
- API Token: root@pam!CLIENT_NAME
- Role: DatastoreBackup
- Propagate: true
Tailscale
Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid relying on a third party, a local instance of Headscale is used as the Tailscale login server.
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update
sudo apt-get install tailscale
systemctl status tailscaled.service
sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
tailscale status
Connect to headscale login server:
To authenticate, visit:
https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
Success.
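Before the client reports Success, the node key from the URL has to be approved on the Headscale server. Depending on the Headscale version the command is roughly as follows, where USERNAME is a placeholder for the Headscale user/namespace:

headscale nodes register --user USERNAME --key nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f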
View more info in the docs: https://earvingad.github.io/posts/headscale/
Jottacloud client
The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
NOTE! This setup still uses the user `kevin` and not the intended `jottad` user.
sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
sudo apt-get update
sudo apt-get install jotta-cli
# configure runtime environment
sudo useradd -m jottad
sudo usermod -a -G jottad backup
Create the systemd unit file `/usr/lib/systemd/user/jottad.service` with the content below, then enable it with the commands after the unit file:
[Unit]
Description=Jotta client daemon
[Service]
Type=notify
# Group=backup
# UMask=0002
# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
Restart=on-failure
[Install]
WantedBy=default.target
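A sketch of enabling the daemon and adding the backed-up data to Jottacloud, run as the user that owns the backup (per the note above, currently `kevin`); the jotta-cli subcommands and the synced path `/mnt/pergamum` are assumptions, not taken verbatim from the existing setup:

systemctl --user daemon-reload
systemctl --user enable --now jottad.service
jotta-cli login
jotta-cli add /mnt/pergamum
jotta-cli status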
Flaws
Since Proxmox Backup Server uses chunks to deduplicate data, the complete set of chunk files is required for a restore. This makes it impossible to download a single file representing a VM or LXC; all files must be downloaded and imported into Proxmox Backup Server for reconstruction.
There also seem to be a lot of chunk files shifting, being added and deleted, which makes the diff uploaded to Jottacloud huge.
Client Configuration
Configure Backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:
- selection mode: include selected VMs
- send email to: [email protected]
- email: on failure only
- mode: snapshot
- enabled: true
- job comment: {{guestname}}, {{node}}, {{vmid}}
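For hosts that are not PVE nodes, backups can also be pushed directly with proxmox-backup-client using the API token and namespace from the Access Control section. A minimal sketch; BACKUP_SERVER and TOKEN_SECRET are placeholders:

export PBS_REPOSITORY='root@pam!CLIENT_NAME@BACKUP_SERVER:proxmox-backup'
export PBS_PASSWORD='TOKEN_SECRET'
proxmox-backup-client backup root.pxar:/ --ns CLIENT_NAME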