Changes for page Proxmox Backup server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version 25.1, edited by Kevin Wiki on 2024/04/06 14:25 (no change comment)
To version 13.1, edited by Kevin Wiki on 2024/04/06 11:12 (no change comment)
(((
(% class="col-xs-12 col-sm-8" %)
(((
The following provides setup steps, configuration explanations and usage instructions for the backup server. This box both generates backups and syncs them to remote locations. See the general backup explanation page [[Server backup>>doc:infra.Backup.WebHome]] for high-level information.

Web GUI: [[https:~~/~~/clio.schleppe:8007/#pbsDashboard>>url:https://clio.schleppe:8007/#pbsDashboard]]

= Backup Server configuration =

The backup server is set up with:

* zfs storage
* ...
** permissions
* timings and simulator

== ZFS storage array ==

There are currently 2 x 8TB WD drives. Current pool status:

{{code language="none"}}
kevin@clio:~$ sudo zpool status pergamum
  pool: pergamum
 state: ONLINE
  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        pergamum                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
            sdc1                                        ONLINE       0     0     0

errors: No known data errors
{{/code}}

=== Creating and expanding zfs pool ===

{{code language="none"}}
zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
zfs set mountpoint=/mnt/pergamum pergamum
(zpool import -c /etc/zfs/zpool.cache -aN)
zpool export pergamum
{{/code}}

Not tried yet, but to add another set of disks as an additional top-level virtual device to the existing RAID-Z pool:

{{code language="none"}}
zpool add -n pergamum raidz DISK1 DISK2
{{/code}}

~> NOTE! `-n` is a dry run, remove it to commit.

== Access Control ==

Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.

...

>Note! The path will not be defined until after the datastore namespace is defined in the steps below.

== Proxmox datastore ==

If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All references to `proxmox-backup` below refer to whatever you named it in this create step.

=== Namespace ===

Namespaces are what we use within a datastore to separate permissions per host. It's important to create one for each of the API tokens created in the Access Control section above.
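For reference, the datastore, API token and permission setup described above can also be done from the PBS shell instead of the web GUI. This is a minimal sketch using the standard `proxmox-backup-manager` CLI; the user and token names are illustrative placeholders, not the actual values used on this box:

{{code language="bash"}}
# create the datastore backed by the ZFS mount (same as the GUI create step)
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum

# create a PBS user and generate an API token for one client host
proxmox-backup-manager user create backup@pbs
proxmox-backup-manager user generate-token backup@pbs client1

# grant the token backup rights on the datastore
proxmox-backup-manager acl update /datastore/proxmox-backup DatastoreBackup --auth-id 'backup@pbs!client1'
{{/code}}

The generated token secret is only shown once, so store it somewhere safe before configuring the client.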
=== Prune & Garbage collect ===

We don't require backups for every day of the year. Pruning lets you systematically delete older backups, retaining backups for the last given number of time intervals. There is a fantastic simulator that can be used to experiment with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is:

* ...
* prune schedule: 0/6:00
* enabled: true

=== Verify jobs ===

Current configuration is:

* ...
* skip verified: true
* re-verify after: 30 days

=== Permissions ===

Permissions are explained in the Access Control section above, but it can be easier to configure permissions from the datastore. Navigate to the datastore Permissions tab and add an API Token Permission:

...

Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid requiring a third party, a local instance of headscale is used as the tailscale login server.

{{code language="bash"}}
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

sudo apt-get update
sudo apt-get install tailscale

systemctl status tailscaled.service
sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
tailscale status
{{/code}}

Connect to the headscale login server:

{{code language="none"}}
$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud

To authenticate, visit:

        https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f

Success.
{{/code}}

View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
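The node key printed above still has to be approved on the headscale server before the client shows up in the network. The exact flags differ between headscale releases (older versions use `~-~-namespace` where newer ones use `~-~-user`), so treat this as an assumed sketch rather than the exact commands used here; the user name `backup` is an illustrative placeholder:

{{code language="bash"}}
# on the headscale server: register the node key printed by `tailscale up`
headscale nodes register --user backup --key nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f

# confirm the machine shows up
headscale nodes list
{{/code}}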
= Jottacloud client =

The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
NOTE! This setup still uses the user `kevin` and not the correct `jottad` user.

{{code language="none"}}
# install jotta-cli
sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
sudo apt-get update
sudo apt-get install jotta-cli

# configure runtime environment
sudo useradd -m jottad
sudo usermod -a -G jottad backup
{{/code}}

Create the systemd file `/usr/lib/systemd/user/jottad.service` and enable it:

{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
[Unit]
Description=Jotta client daemon

[Service]
Type=notify
# Group=backup
# UMask=0002

# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
Restart=on-failure

[Install]
WantedBy=default.target
{{/code}}
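The enable step itself isn't captured on this page. Assuming `jottad` is meant to run as a systemd user service under the `jottad` account created above, something along these lines would typically be used (an assumed sketch, not commands taken from this setup):

{{code language="bash"}}
# let the jottad user's systemd instance run without an interactive login
sudo loginctl enable-linger jottad

# enable and start the user unit as the jottad user
sudo -u jottad XDG_RUNTIME_DIR=/run/user/$(id -u jottad) systemctl --user enable --now jottad.service

# check that the daemon came up
sudo -u jottad XDG_RUNTIME_DIR=/run/user/$(id -u jottad) systemctl --user status jottad.service
{{/code}}

Lingering is needed because user services normally only run while that user has an active login session.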
== Flaws ==

Since proxmox backup server uses chunks for deduplicating data, a complete file list is required. This makes it impossible to download a single file representing a VM or LXC; all files must be downloaded and imported into proxmox backup server for reconstruction.

It also seems like there are a LOT of files shifting - being added and deleted - making the diff uploaded to jottacloud huge.

= Client Configuration =

Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added, input the following preferences:

* selection mode: include selected VMs
* send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
* email: on failure only
* mode: snapshot
* enabled: true
* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
)))

(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}