Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 29.1
edited by Kevin Wiki
on 2024/05/21 21:23
Change comment: There is no comment for this version
To version 18.1
edited by Kevin Wiki
on 2024/04/06 12:48
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,12 +2,6 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 -(% class="wikigeneratedid" %)
6 -The following provides setup steps, configuration details, and usage instructions for the backup server. This machine both generates backups and syncs them to remote locations. See the general backup overview page [[Server backup>>doc:infra.Backup.WebHome]] for high-level information.
7 -
8 -(% class="wikigeneratedid" %)
9 -Web GUI: [[https:~~/~~/clio.schleppe:8007/#pbsDashboard>>url:https://clio.schleppe:8007/#pbsDashboard]]
10 -
11 11  = Backup Server configuration =
12 12  
13 13  The backup server is set up with:
... ... @@ -25,45 +25,36 @@
25 25  
26 26  There are currently 2 x 8TB WD drives. Current pool status:
27 27  
28 -(((
29 -{{code language="none"}}
30 -kevin@clio:~$ sudo zpool status pergamum
31 -pool: pergamum
32 -state: ONLINE
33 -  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
22 +{{{kevin@clio:~$ sudo zpool status pergamum
23 + pool: pergamum
24 + state: ONLINE
25 + scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
34 34  config:
35 -        NAME                                            STATE     READ WRITE CKSUM
36 -        pergamum                                        ONLINE       0     0     0
37 -          raidz1-0                                      ONLINE       0     0     0
38 -            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
39 -            sdc1                                        ONLINE       0     0     0
40 -errors: No known data errors
41 -{{/code}}
42 -)))
43 43  
 28 +        NAME                                            STATE     READ WRITE CKSUM
 29 +        pergamum                                        ONLINE       0     0     0
 30 +          raidz1-0                                      ONLINE       0     0     0
 31 +            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
 32 +            sdc1                                        ONLINE       0     0     0
44 44  
34 +errors: No known data errors}}}
35 +
36 +
45 45  === Creating and expanding zfs pool ===
46 46  
47 -(((
48 -{{code language="none"}}
39 +```
49 49  zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
50 50  zfs set mountpoint=/mnt/pergamum pergamum
51 51  (zpool import -c /etc/zfs/zpool.cache -aN)
52 52  zpool export pergamum
53 -{{/code}}
54 -)))
44 +```
55 55  
56 56  
57 -(((
58 58  I have not tried this yet, but to add another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
59 -
60 -{{code language="none"}}
48 +```
61 61  zpool add -n pergamum raidz DISK1 DISK2
62 -{{/code}}
63 -
64 -
50 +```
65 65  ~> NOTE! `-n` is a dry run; remove it to commit.
66 -)))
67 67  
68 68  
69 69  == Access Control ==
... ... @@ -134,40 +134,16 @@
134 134  
135 135  Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of headscale is used as the tailscale login server.
136 136  
137 -{{code language="bash"}}
138 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
139 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
122 +Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
123 +To view the status: `sudo tailscale status`.
140 140  
141 -sudo apt-get update
142 -sudo apt-get install tailscale
143 -
144 -systemctl status tailscaled.service
145 -sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
146 -tailscale status
147 -{{/code}}
148 -
149 -Connect to headscale login server:
150 -
151 -{{code language="none"}}
152 -$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
153 -
154 -To authenticate, visit:
155 -
156 - https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
157 -
158 -Success.
159 -{{/code}}
160 -
161 -View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
162 -
163 163  = Jottacloud client =
164 164  
165 165  The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
166 166  NOTE! This setup still uses the user `kevin` and not the dedicated `jottad` user.
167 167  
168 -(((
169 -{{code language="none"}}
170 -# install jotta-cli
130 +
131 +{{{# install jotta-cli
171 171  sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
172 172  echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
173 173  sudo apt-get update
... ... @@ -175,10 +175,9 @@
175 175  
176 176  # configure runtime environment
177 177  sudo useradd -m jottad
178 -sudo usermod -a -G jottad backup
179 -{{/code}}
180 -)))
139 +sudo usermod -a -G jottad backup}}}
181 181  
141 +
182 182  Create the systemd file `/usr/lib/systemd/user/jottad.service` and enable it with:
183 183  
184 184  (((
... ... @@ -208,53 +208,16 @@
208 208  
210 210  It also seems like a LOT of files are shifting (being added and deleted), making the diff uploaded to Jottacloud huge.
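One rough way to quantify that churn, sketched under the assumption that the datastore lives under the pool mountpoint from the ZFS section (the default path is an assumption; override `DATASTORE` as needed):

```shell
# count files modified in the last 24 hours under the datastore root;
# the default path is an assumption, override via the DATASTORE variable
DATASTORE="${DATASTORE:-/mnt/pergamum/proxmox-backup}"
find "$DATASTORE" -type f -mtime -1 2>/dev/null | wc -l
```

A large count on a day with no new backups would confirm chunk files are being rewritten rather than left in place.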
210 210  
211 -= Syncthing =
212 -
213 -TODO
214 -
215 215  = Client Configuration =
216 216  
217 217  Configure backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:
218 218  
219 219  * selection mode: include selected VMs
220 -* send email to: EMAIL_ADDRESS
176 +* send email to: [[kevin.midboe+PVE_HOSTNAME@gmail.com>>mailto:kevin.midboe+PVE_HOSTNAME@gmail.com]]
221 221  * email: on failure only
222 222  * mode: snapshot
223 223  * enabled: true
224 224  * job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
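For reference, Proxmox VE stores jobs configured this way in `/etc/pve/jobs.cfg`. A hand-written sketch of roughly what such an entry looks like (the job id, schedule, and VM ids here are made up; the storage name is taken from the CLI examples on this page):

```
vzdump: backup-example
        schedule sat 03:00
        storage proxmox-backup
        mode snapshot
        mailto EMAIL_ADDRESS
        mailnotification failure
        notes-template {{guestname}}, {{node}}, {{vmid}}
        enabled 1
        vmid 100,101
```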
225 -
226 -= Debugging/issues live here =
227 -
228 -== Permission denied anything for certain backups ==
229 -
230 -When trying to restore a VM I noticed that it was very outdated. Before doing anything I got a `Permission denied (os error 13)` error message. I checked the permissions of the storage mount in the Proxmox cluster, generated a new API key, and removed and re-added the storage to the node, still getting permission denied. What gave it away was that I also got the error when running the CLI command from the proxmox-backup-server host.
231 -
232 -{{code language="bash"}}
233 -kevin@clio:~$ sudo proxmox-backup-client snapshot forget -ns apollo -repository proxmox-backup 'vm/201/2023-07-31T01:31:18Z'
234 -[sudo] password for kevin:
235 -Password for "root@pam": *****************
236 -fingerprint: **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**
237 -Are you sure you want to continue connecting? (y/n): y
238 -storing login ticket failed: $XDG_RUNTIME_DIR must be set
239 -Error: removing backup snapshot "/mnt/pergamum/proxmox-backup/ns/apollo/vm/201/2023-07-31T01:31:18Z" failed - Permission denied (os error 13)
240 -
241 -kevin@clio:~$ ls -l "/mnt/pergamum/proxmox-backup/ns/apollo/vm/201/2023-07-31T01:31:18Z"
242 -total 263
243 --rw-r--r-- 1 root root 667 Feb 17 01:16 client.log.blob
244 --rw-r--r-- 1 root root 167936 Feb 17 01:16 drive-scsi0.img.fidx
245 --rw-r--r-- 1 root root 539 Feb 17 01:16 index.json.blob
246 --rw-r--r-- 1 root root 342 Feb 17 01:16 qemu-server.conf.blob
247 -{{/code}}
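As an aside, the `storing login ticket failed: $XDG_RUNTIME_DIR must be set` line above is unrelated to the permission error; it typically appears under `sudo` because the runtime directory variable is not carried over. A sketch of a workaround before re-running the client:

```shell
# under sudo/su, XDG_RUNTIME_DIR is often unset; point it at the current
# user's runtime directory so the client can cache its login ticket
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
```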
248 -
249 -Aha! The owner of everything in these folders should be {{code language="none"}}backup:backup{{/code}}.
250 -
251 -**Resolve using:**
252 -
253 -{{code language="bash"}}
254 -kevin@clio:~$ sudo chown -R backup:backup /mnt/pergamum/proxmox-backup/ns/apollo/*
255 -{{/code}}
256 -
257 -
258 258  )))
259 259  
260 260