Changes for page Proxmox Backup server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version
25.1
edited by Kevin Wiki
on 2024/04/06 14:25
Change comment:
There is no comment for this version
To version
18.1
edited by Kevin Wiki
on 2024/04/06 12:48
Change comment:
There is no comment for this version
Summary

Details
- Page properties
- Content
@@ -2,12 +2,6 @@
 (((
 (% class="col-xs-12 col-sm-8" %)
 (((
-(% class="wikigeneratedid" %)
-The following provides setup steps, configuration explanation and usage instructions for the backup server. This box both generates backups and syncs them to remote locations. See the general backup explanation page [[Server backup>>doc:infra.Backup.WebHome]] for high-level information.
-
-(% class="wikigeneratedid" %)
-Web GUI: [[https:~~/~~/clio.schleppe:8007/#pbsDashboard>>url:https://clio.schleppe:8007/#pbsDashboard]]
-
 = Backup Server configuration =
 
 Backup server is set up with:
@@ -25,45 +25,36 @@
 
 There are currently 2 x 8TB WD drives. Current pool status:
 
-(((
-{{code language="none"}}
-kevin@clio:~$ sudo zpool status pergamum
-  pool: pergamum
- state: ONLINE
-  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
-config:
-	NAME                                            STATE     READ WRITE CKSUM
-	pergamum                                        ONLINE       0     0     0
-	  raidz1-0                                      ONLINE       0     0     0
-	    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
-	    sdc1                                        ONLINE       0     0     0
-errors: No known data errors
-{{/code}}
-)))
+{{{kevin@clio:~$ sudo zpool status pergamum
+  pool: pergamum
+ state: ONLINE
+  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
+config:
+
+	NAME                                            STATE     READ WRITE CKSUM
+	pergamum                                        ONLINE       0     0     0
+	  raidz1-0                                      ONLINE       0     0     0
+	    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
+	    sdc1                                        ONLINE       0     0     0
+
+errors: No known data errors}}}
 
 === Creating and expanding zfs pool ===
 
-(((
-{{code language="none"}}
+```
 zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
 zfs set mountpoint=/mnt/pergamum pergamum
 (zpool import -c /etc/zfs/zpool.cache -aN)
 zpool export pergamum
-{{/code}}
-)))
+```
 
-(((
 Have not tried yet, but adding another set of disks as an additional top-level virtual device to our existing RAID-Z pool:
-
-{{code language="none"}}
+```
 zpool add -n pergamum raidz DISK1 DISK2
-{{/code}}
+```
 ~> NOTE! `-n` is a dry run; remove it to commit.
-)))
 
 == Access Control ==
@@ -134,40 +134,16 @@
 
 Tailscale is used to create a network that uses wireguard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of headscale is used as the tailscale login server.
 
-{{code language="bash"}}
-curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
-curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
-
-sudo apt-get update
-sudo apt-get install tailscale
-
-systemctl status tailscaled.service
-sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
-tailscale status
-{{/code}}
-
-Connect to headscale login server:
-
-{{code language="none"}}
-$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
-
-To authenticate, visit:
-
-	https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
-
-Success.
-{{/code}}
-
-View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
-
+Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
+To view the status: `sudo tailscale status`.
 
 = Jottacloud client =
 
 The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
 NOTE! This setup still uses user `kevin` and not the correct jottad user.
 
-(((
-{{code language="none"}}
-# install jotta-cli
+
+{{{# install jotta-cli
 sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
 echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
 sudo apt-get update
@@ -175,10 +175,9 @@
 
 # configure runtime environment
 sudo useradd -m jottad
-sudo usermod -a -G jottad backup
-{{/code}}
-)))
+sudo usermod -a -G jottad backup}}}
 
 Create systemd file `/usr/lib/systemd/user/jottad.service` and enable with:
 
 (((
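The Jottacloud hunk references `/usr/lib/systemd/user/jottad.service`, but the diff cuts off before the unit contents or the enable command. A minimal sketch of what such a user unit could look like — the entire unit body below is an assumption, not the page's actual file:

```
# /usr/lib/systemd/user/jottad.service -- hypothetical sketch, not from the page
[Unit]
Description=Jottacloud daemon (jottad)
After=network-online.target

[Service]
ExecStart=/usr/bin/jottad
Restart=on-failure

[Install]
WantedBy=default.target
```

Enabling a user unit would then be along the lines of `systemctl --user enable --now jottad.service` run as the target user, with `loginctl enable-linger jottad` so the daemon survives logout; verify the binary path and flags against the installed jotta-cli package.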
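The `zpool status` output captured in the ZFS hunk above lends itself to a scripted health check. A minimal sketch — the helper name and the grep approach are mine, not the page's own tooling; it only assumes the output format shown above:

```shell
# pool_is_healthy STATUS_TEXT
# Succeed only when the captured `zpool status` output reports an ONLINE
# pool and "No known data errors" -- the two lines visible in the output
# captured on this page.
pool_is_healthy() {
    printf '%s\n' "$1" | grep -q 'state: ONLINE' &&
        printf '%s\n' "$1" | grep -q 'errors: No known data errors'
}
```

Usage from a cron job could then look like `pool_is_healthy "$(sudo zpool status pergamum)" || echo "pergamum needs attention"`.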
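The removed tailscale hunk above shows the authentication URL that `tailscale up` prints; the trailing path segment is the node key that must be approved on the headscale side. A small helper to pull the key out of that URL — the function name is an invention of mine, and the URL format is simply the one captured above:

```shell
# nodekey_from_url URL
# Extract the node key from the register URL that `tailscale up` prints
# when pointed at a headscale login server (format as captured above).
nodekey_from_url() {
    printf '%s\n' "$1" | sed -n 's#.*/register/##p'
}
```

On the headscale host the extracted key is then fed to the `headscale nodes register` subcommand; check `headscale --help` for the exact flags of your installed version, since they have changed across releases.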