Changes for page Proxmox Backup server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version 17.1, edited by Kevin Wiki on 2024/04/06 12:48
Change comment: There is no comment for this version

To version 27.1, edited by Kevin Wiki on 2024/04/06 20:07
Change comment: There is no comment for this version
Summary
Details
- Page properties
- Content
@@ -2,6 +2,12 @@
(((
(% class="col-xs-12 col-sm-8" %)
(((
+(% class="wikigeneratedid" %)
+The following provides setup steps, configuration explanation, and usage instructions for the backup server. This box both generates backups and syncs them to remote locations. See the general backup page [[Server backup>>doc:infra.Backup.WebHome]] for high-level information.
+
+(% class="wikigeneratedid" %)
+Web GUI: [[https:~~/~~/clio.schleppe:8007/#pbsDashboard>>url:https://clio.schleppe:8007/#pbsDashboard]]
+
= Backup Server configuration =

The backup server is set up with:

@@ -19,36 +19,45 @@

There are currently 2 x 8TB WD drives. Current pool status:

-{{{kevin@clio:~$ sudo zpool status pergamum
-  pool: pergamum
- state: ONLINE
-  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
+(((
+{{code language="none"}}
+kevin@clio:~$ sudo zpool status pergamum
+pool: pergamum
+state: ONLINE
+ scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
config:
+    NAME                                           STATE     READ WRITE CKSUM
+    pergamum                                       ONLINE       0     0     0
+      raidz1-0                                     ONLINE       0     0     0
+        scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE       0     0     0
+        sdc1                                       ONLINE       0     0     0
+errors: No known data errors
+{{/code}}
+)))

-    NAME                                           STATE     READ WRITE CKSUM
-    pergamum                                       ONLINE       0     0     0
-      raidz1-0                                     ONLINE       0     0     0
-        scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE       0     0     0
-        sdc1                                       ONLINE       0     0     0

-errors: No known data errors}}}
-
-
=== Creating and expanding zfs pool ===

-```
+(((
+{{code language="none"}}
zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
zfs set mountpoint=/mnt/pergamum pergamum
(zpool import -c /etc/zfs/zpool.cache -aN)
zpool export pergamum
-```
+{{/code}}
+)))


+(((
Have not tried yet, but adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
-```
+
+{{code language="none"}}
zpool add -n pergamum raidz DISK1 DISK2
-```
+{{/code}}
+
+
~> NOTE! `-n` is a dry run; remove it to commit.
+)))


== Access Control ==
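Before committing a pool expansion like the `zpool add` shown in the hunk above, it may help to confirm which device paths to use and what the dry run reports. This is a minimal, hedged sketch only; the partition UUIDs, like DISK1/DISK2 in the page itself, are illustrative placeholders and not values from this system:

{{code language="bash"}}
# List candidate disks and their stable partition identifiers
lsblk -o NAME,SIZE,TYPE,PARTUUID
ls -l /dev/disk/by-partuuid/

# Dry run (-n): prints the resulting pool layout without changing anything
sudo zpool add -n pergamum raidz \
  /dev/disk/by-partuuid/EXAMPLE-UUID-1 \
  /dev/disk/by-partuuid/EXAMPLE-UUID-2

# After re-running without -n, verify the new vdev and capacity
sudo zpool status pergamum
sudo zpool list pergamum
{{/code}}

Note that `zpool add` attaches a second raidz vdev alongside the existing one; it does not restripe or reshape data already in the pool.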
@@ -119,16 +119,40 @@

Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid requiring a third party, a local instance of headscale is used as the tailscale login server.

-Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
-To view the status: `sudo tailscale status`.
+{{code language="bash"}}
+curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
+curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

+sudo apt-get update
+sudo apt-get install tailscale
+
+systemctl status tailscaled.service
+sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
+tailscale status
+{{/code}}
+
+Connect to the headscale login server:
+
+{{code language="none"}}
+$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
+
+To authenticate, visit:
+
+ https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
+
+Success.
+{{/code}}
+
+View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
+
= Jottacloud client =

The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
NOTE! This setup still uses user `kevin` and not the correct jottad user.

-
-{{{# install jotta-cli
+(((
+{{code language="none"}}
+# install jotta-cli
sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
sudo apt-get update

@@ -136,15 +136,16 @@

# configure runtime environment
sudo useradd -m jottad
-sudo usermod -a -G jottad backup}}}
+sudo usermod -a -G jottad backup
+{{/code}}
+)))

-
Create the systemd file `/usr/lib/systemd/user/jottad.service` and enable it with:

(((
- \\
+

-{{code layout="LINENUMBERS" language="ini"}}
+{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
[Unit]
Description=Jotta client daemon

@@ -168,12 +168,16 @@

It also seems like there are a LOT of files shifting - being added and deleted - making the diff uploaded to Jottacloud huge.

+= Syncthing =
+
+TODO
+
= Client Configuration =

Configure backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:

* selection mode: include selected VMs
-* send email to: [[kevin.midboe+PVE_HOSTNAME@gmail.com>>mailto:kevin.midboe+PVE_HOSTNAME@gmail.com]]
+* send email to: EMAIL_ADDRESS
* email: on failure only
* mode: snapshot
* enabled: true
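As a cross-check of the Client Configuration preferences above, the same settings can be exercised as a one-off backup from a PVE host shell with `vzdump`. This is a hedged sketch: the VM ID and storage name below are placeholders, and the scheduled job configured in the GUI remains the authoritative setup:

{{code language="bash"}}
# One-off backup mirroring the job settings above.
# VM ID 100 and storage name "pbs" are placeholders; substitute real values.
vzdump 100 \
  --storage pbs \
  --mode snapshot \
  --mailnotification failure \
  --mailto EMAIL_ADDRESS
{{/code}}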