Wiki source code of Proxmox Backup server
Version 20.1 by Kevin Wiki on 2024/04/06 14:02
| author | version | line-number | content |
|---|---|---|---|
| 1 | (% class="row" %) | ||
| 2 | ((( | ||
| 3 | (% class="col-xs-12 col-sm-8" %) | ||
| 4 | ((( | ||
| 5 | = Backup Server configuration = | ||
| 6 | |||
| 7 | The backup server is set up with: | ||
| 8 | |||
| 9 | * zfs storage | ||
| 10 | * access control - api tokens | ||
| 11 | * datastore | ||
| 12 | ** sync jobs | ||
| 13 | ** prune jobs | ||
| 14 | ** verify jobs | ||
| 15 | ** permissions | ||
| 16 | * timings and simulator | ||
| 17 | |||
| 18 | == ZFS storage array == | ||
| 19 | |||
| 20 | There are currently 2 x 8TB WD drives. Current pool status: | ||
| 21 | |||
| 22 | ((( | ||
| 23 | {{code language="none"}} | ||
| 24 | kevin@clio:~$ sudo zpool status pergamum | ||
| 25 | pool: pergamum | ||
| 26 | state: ONLINE | ||
| 27 | scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024 | ||
| 28 | config: | ||
| 29 | NAME STATE READ WRITE CKSUM | ||
| 30 | pergamum ONLINE 0 0 0 | ||
| 31 | raidz1-0 ONLINE 0 0 0 | ||
| 32 | scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE 0 0 0 | ||
| 33 | sdc1 ONLINE 0 0 0 | ||
| 34 | errors: No known data errors | ||
| 35 | {{/code}} | ||
| 36 | ))) | ||
| 37 | |||
| 38 | |||
| 39 | === Creating and expanding zfs pool === | ||
| 40 | |||
| 41 | ((( | ||
| 42 | {{code language="none"}} | ||
| 43 | zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f | ||
| 44 | zfs set mountpoint=/mnt/pergamum pergamum | ||
| 45 | (zpool import -c /etc/zfs/zpool.cache -aN) | ||
| 46 | zpool export pergamum | ||
| 47 | {{/code}} | ||
| 48 | ))) | ||
| 49 | |||
| 50 | |||
| 51 | ((( | ||
| 52 | Not tried yet, but another set of disks can be added as an additional top-level virtual device (vdev) to the existing RAID-Z pool: | ||
| 53 | |||
| 54 | {{code language="none"}} | ||
| 55 | zpool add -n pergamum raidz DISK1 DISK2 | ||
| 56 | {{/code}} | ||
| 57 | |||
| 58 | |||
| 59 | ~> NOTE! `-n` performs a dry run; remove it to commit the change. | ||
| 60 | ))) | ||
| 61 | |||
| 62 | |||
| 63 | == Access Control == | ||
| 64 | |||
| 65 | Each client host that backs up its contents to the backup server should have its own unique API token for authentication. | ||
| 66 | |||
| 67 | API Token: | ||
| 68 | |||
| 69 | * user: root@pam | ||
| 70 | * token name: CLIENT_NAME | ||
| 71 | * expire: never | ||
| 72 | * enabled: true | ||
| 73 | |||
| 74 | Permissions - Add an API Token Permission: | ||
| 75 | |||
| 76 | * path: /datastore/proxmox-backup/CLIENT_NAME | ||
| 77 | * api token: root@pam!CLIENT_NAME | ||
| 78 | * role: DatastoreBackup | ||
| 79 | * propagate: true | ||
| 80 | |||
| 81 | >Note! The path will not be defined until after the datastore namespace is defined in the steps below. | ||
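The same token and permission can also be created from the PBS shell. A sketch, using the placeholder `CLIENT_NAME` and the datastore name from this wiki (note that the generated token secret is printed only once and must be saved):

```
# Generate an API token for root@pam (the secret is shown once).
proxmox-backup-manager user generate-token root@pam CLIENT_NAME

# Grant the token the DatastoreBackup role on the client's namespace path.
proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME \
    DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
```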
| 82 | |||
| 83 | == Proxmox datastore == | ||
| 84 | |||
| 85 | If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All later references to `proxmox-backup` refer to the name chosen in this create step. | ||
| 86 | |||
| 87 | === Namespace === | ||
| 88 | |||
| 89 | Namespaces are used within a datastore to separate permissions per host. It is important to create one for each API token created in the Access Control section above. | ||
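Namespaces can be created in the GUI under the datastore's Content tab, or from the command line. A sketch, assuming a recent PBS version (2.2 or later, which introduced namespaces) and sufficient privileges on the datastore:

```
# Create a per-client namespace in the proxmox-backup datastore.
proxmox-backup-client namespace create CLIENT_NAME \
    --repository root@pam@localhost:proxmox-backup
```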
| 90 | |||
| 91 | === Prune & Garbage collect === | ||
| 92 | |||
| 93 | We don't require backups for every day of the year. Pruning systematically deletes older backups while retaining a given number of backups per time interval (daily, weekly, monthly, and so on). There is a fantastic simulator for experimenting with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is: | ||
| 94 | |||
| 95 | * datastore: proxmox-backup | ||
| 96 | * namespace: root | ||
| 97 | * keep last: 4 | ||
| 98 | * keep hourly: - | ||
| 99 | * keep daily: 6 | ||
| 100 | * keep weekly: 3 | ||
| 101 | * keep monthly: 6 | ||
| 102 | * keep yearly: 4 | ||
| 103 | * max_depth: full | ||
| 104 | * prune schedule: 0/6:00 | ||
| 105 | * enabled: true | ||
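The retention rules can be illustrated with a small local sketch. This is not the PBS implementation, and the snapshot timestamps are made up; it only mimics the "keep daily: 6" rule, where the newest snapshot of each calendar day is kept for the six most recent days and everything older would be pruned:

```shell
# Hypothetical snapshot timestamps for one backup group (YYYY-MM-DDTHH:MM).
snapshots="2024-04-01T02:00 2024-04-01T20:00 2024-04-02T20:00 2024-04-03T20:00 2024-04-04T20:00 2024-04-05T20:00 2024-04-06T20:00 2024-04-07T20:00"

# Sort newest first (word splitting of $snapshots is intentional), keep the
# first snapshot seen per day (i.e. the newest one), then keep at most 6 days.
kept=$(printf '%s\n' $snapshots | sort -r | awk -F'T' '!seen[$1]++' | head -n 6)
echo "$kept"
```

Note how the two snapshots on 2024-04-01 both fall outside the six retained days, while on days with several snapshots only the newest would survive.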
| 106 | |||
| 107 | === Verify jobs === | ||
| 108 | |||
| 109 | Current configuration is: | ||
| 110 | |||
| 111 | * local datastore: proxmox-backup | ||
| 112 | * namespace: root | ||
| 113 | * max-depth: full | ||
| 114 | * schedule: daily | ||
| 115 | * skip verified: true | ||
| 116 | * re-verify after: 30 days | ||
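For reference, this corresponds to an entry along these lines in `/etc/proxmox-backup/verification.cfg`. A sketch from memory: the job id `v-daily` is made up, and the property names should be double-checked against the PBS documentation:

```
verification: v-daily
    store proxmox-backup
    schedule daily
    ignore-verified true
    outdated-after 30
```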
| 117 | |||
| 118 | === Permissions === | ||
| 119 | |||
| 120 | Permissions are explained in the Access Control section above, but it can be easier to configure permissions from the datastore. Navigate to the datastore Permission tab and add API Token Permission: | ||
| 121 | |||
| 122 | * path: /datastore/proxmox-backup/CLIENT_NAME | ||
| 123 | * API Token: root@pam!CLIENT_NAME | ||
| 124 | * Role: DatastoreBackup | ||
| 125 | * Propagate: true | ||
| 126 | |||
| 127 | = Tailscale = | ||
| 128 | |||
| 129 | Tailscale is used to create a WireGuard-based network that transparently connects local and remote machines. To avoid relying on a third party, a local Headscale instance is used as the Tailscale login server. | ||
| 130 | |||
| 131 | Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`. | ||
| 132 | To view the status: `sudo tailscale status`. | ||
| 133 | |||
| 134 | {{code language="bash"}} | ||
| 135 | curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null | ||
| 136 | curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list | ||
| 137 | |||
| 138 | sudo apt-get update | ||
| 139 | sudo apt-get install tailscale | ||
| 140 | |||
| 141 | systemctl status tailscaled.service | ||
| 142 | sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud --authkey AUTHKEY | ||
| 143 | tailscale status | ||
| 144 | {{/code}} | ||
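The `AUTHKEY` above is generated on the Headscale server. A sketch; flag names vary between Headscale versions (older releases use `--namespace` where newer ones use `--user`), so check `headscale preauthkeys create --help`:

```
# On the headscale host: create a pre-auth key valid for 24 hours.
headscale preauthkeys create --user USERNAME --expiration 24h
```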
| 145 | |||
| 146 | = Jottacloud client = | ||
| 147 | |||
| 148 | The cloud backup provider used is Jottacloud. They provide a CLI that makes it easy to add directories to sync to their cloud backup storage. | ||
| 149 | NOTE! This setup still uses the user `kevin` and not the intended `jottad` user. | ||
| 150 | |||
| 151 | ((( | ||
| 152 | {{code language="none"}} | ||
| 153 | # install jotta-cli | ||
| 154 | sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg | ||
| 155 | echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list | ||
| 156 | sudo apt-get update | ||
| 157 | sudo apt-get install jotta-cli | ||
| 158 | |||
| 159 | # configure runtime environment | ||
| 160 | sudo useradd -m jottad | ||
| 161 | sudo usermod -a -G jottad backup | ||
| 162 | {{/code}} | ||
| 163 | ))) | ||
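After installation the daemon must be authenticated and directories added for backup. A sketch from memory of the jotta-cli workflow; the subcommand names and the example path should be double-checked against `jotta-cli help`:

```
jotta-cli login                  # authenticate against the Jottacloud account
jotta-cli add /mnt/pergamum      # example path: start backing up this directory
jotta-cli status                 # show sync progress
```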
| 164 | |||
| 165 | Create the following systemd unit file at `/usr/lib/systemd/user/jottad.service`, then enable it: | ||
| 166 | |||
| 167 | ((( | ||
| 168 | |||
| 169 | |||
| 170 | {{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}} | ||
| 171 | [Unit] | ||
| 172 | Description=Jotta client daemon | ||
| 173 | |||
| 174 | [Service] | ||
| 175 | Type=notify | ||
| 176 | # Group=backup | ||
| 177 | # UMask=0002 | ||
| 178 | |||
| 179 | # EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env | ||
| 180 | ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/ | ||
| 181 | Restart=on-failure | ||
| 182 | |||
| 183 | [Install] | ||
| 184 | WantedBy=default.target | ||
| 185 | {{/code}} | ||
| 186 | ))) | ||
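With the unit file in place, the user service can be enabled along these lines. A sketch: the `kevin` user follows the note above, and lingering is an assumption so the user manager (and thus jottad) keeps running without an active login session:

```
systemctl --user daemon-reload
systemctl --user enable --now jottad.service

# Keep the user's systemd instance running after logout:
sudo loginctl enable-linger kevin
```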
| 187 | |||
| 188 | == Flaws == | ||
| 189 | |||
| 190 | Since Proxmox Backup Server deduplicates data into chunks, the complete set of files is required for a restore. This makes it impossible to download a single file representing a VM or LXC: all files must be downloaded and imported into Proxmox Backup Server for reconstruction. | ||
| 191 | |||
| 192 | There also appears to be a lot of churn in the files (constant additions and deletions), making the diff uploaded to Jottacloud huge. | ||
| 193 | |||
| 194 | = Client Configuration = | ||
| 195 | |||
| 196 | Configure backups at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, enter the following preferences: | ||
| 197 | |||
| 198 | * selection mode: include selected VMs | ||
| 199 | * send email to: [[[email protected]>>mailto:[email protected]]] | ||
| 200 | * email: on failure only | ||
| 201 | * mode: snapshot | ||
| 202 | * enabled: true | ||
| 203 | * job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}} | ||
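On the PVE side, the PBS datastore can also be attached as a storage from the CLI. A sketch: the storage id `pbs-backup`, `HOST`, `TOKEN_SECRET`, and the fingerprint are placeholders, and the fingerprint is shown on the PBS dashboard:

```
pvesm add pbs pbs-backup \
    --server HOST \
    --datastore proxmox-backup \
    --namespace CLIENT_NAME \
    --username 'root@pam!CLIENT_NAME' \
    --password TOKEN_SECRET \
    --fingerprint 'XX:XX:XX:...'
```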
| 204 | ))) | ||
| 205 | |||
| 206 | |||
| 207 | (% class="col-xs-12 col-sm-4" %) | ||
| 208 | ((( | ||
| 209 | {{box title="**Contents**"}} | ||
| 210 | {{toc/}} | ||
| 211 | {{/box}} | ||
| 212 | ))) | ||
| 213 | ))) |