Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 16.1
edited by Kevin Wiki
on 2024/04/06 11:37
Change comment: There is no comment for this version
To version 23.1
edited by Kevin Wiki
on 2024/04/06 14:16
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,6 +2,9 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 +(% class="wikigeneratedid" %)
6 +Backups are primarily done through Proxmox Backup Server, taking snapshots of running LXC containers and VMs. These are stored on a mirrored ZFS array and synchronized both to an off-site location and to a cloud storage provider.
7 +
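As an illustration of the off-site leg, a sync job on the receiving PBS instance could look roughly like the sketch below (PBS sync jobs pull from a remote). The remote name, host, auth-id, password, and datastore names are placeholders, not values from this setup:

{{code language="bash"}}
# placeholder remote that points back at this backup server
proxmox-backup-manager remote create mainpbs \
    --host pbs.example.lan \
    --auth-id 'sync@pbs' \
    --password 'REDACTED'

# pull the pergamum datastore from the remote into the local store daily
proxmox-backup-manager sync-job create pull-pergamum \
    --remote mainpbs --remote-store pergamum \
    --store pergamum --schedule daily
{{/code}}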
5 5  = Backup Server configuration =
6 6  
7 7  The backup server is set up with:
... ... @@ -19,36 +19,45 @@
19 19  
20 20  There are currently 2 x 8TB WD drives. Current pool status:
21 21  
22 -{{{kevin@clio:~$ sudo zpool status pergamum
23 - pool: pergamum
24 - state: ONLINE
25 - scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
25 +(((
26 +{{code language="none"}}
27 +kevin@clio:~$ sudo zpool status pergamum
28 +  pool: pergamum
29 + state: ONLINE
30 +  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
26 26  config:
32 +        NAME                                            STATE     READ WRITE CKSUM
33 +        pergamum                                        ONLINE       0     0     0
34 +          raidz1-0                                      ONLINE       0     0     0
35 +            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
36 +            sdc1                                        ONLINE       0     0     0
37 +errors: No known data errors
38 +{{/code}}
39 +)))
27 27  
28 - NAME STATE READ WRITE CKSUM
29 - pergamum ONLINE 0 0 0
30 - raidz1-0 ONLINE 0 0 0
31 - scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE 0 0 0
32 - sdc1 ONLINE 0 0 0
33 33  
34 -errors: No known data errors}}}
35 -
36 -
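Scrubs can also be kicked off by hand; the scan line above is consistent with the stock monthly scrub cron job that Debian's zfsutils-linux package ships in /etc/cron.d/zfsutils-linux (an assumption, this host may schedule it differently):

{{code language="bash"}}
# start a scrub of the pool and check on its progress
sudo zpool scrub pergamum
sudo zpool status pergamum
{{/code}}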
37 37  === Creating and expanding the ZFS pool ===
38 38  
39 -```
44 +(((
45 +{{code language="none"}}
40 40  zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
41 41  zfs set mountpoint=/mnt/pergamum pergamum
42 42  (zpool import -c /etc/zfs/zpool.cache -aN)
43 43  zpool export pergamum
44 -```
50 +{{/code}}
51 +)))
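The parenthesized import line re-imports all pools from the cache file without mounting them (`-a` all pools, `-N` no mount). To verify the result, something like:

{{code language="bash"}}
# confirm the pool is healthy and the mountpoint took effect
sudo zpool status pergamum
sudo zfs get mountpoint pergamum
{{/code}}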
45 45  
46 46  
54 +(((
47 47  Have not tried yet, but adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
48 -```
56 +
57 +{{code language="none"}}
49 49  zpool add -n pergamum raidz DISK1 DISK2
50 -```
59 +{{/code}}
60 +
61 +
51 51  ~> NOTE! `-n` is a dry run; remove it to commit.
63 +)))
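After committing for real, the new vdev should show up as a second raidz1 entry in the pool layout, e.g.:

{{code language="bash"}}
# -v lists capacity per vdev, making the added raidz vdev visible
sudo zpool list -v pergamum
{{/code}}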
52 52  
53 53  
54 54  == Access Control ==
... ... @@ -119,16 +119,40 @@
119 119  
120 120  Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid requiring a third party, a local instance of Headscale is used as the Tailscale login server.
121 121  
122 -Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
123 -To view the status: `sudo tailscale status`.
134 +{{code language="bash"}}
135 +curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
136 +curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
124 124  
138 +sudo apt-get update
139 +sudo apt-get install tailscale
140 +
141 +systemctl status tailscaled.service
142 +sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
143 +tailscale status
144 +{{/code}}
145 +
146 +Connect to the headscale login server:
147 +
148 +{{code language="none"}}
149 +$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
150 +
151 +To authenticate, visit:
152 +
153 + https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
154 +
155 +Success.
156 +{{/code}}
157 +
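On the headscale host, the node then has to be approved with the key from the URL above. A sketch; the exact flag (`--user` vs the older `--namespace`) depends on the headscale version, and USER is a placeholder:

{{code language="bash"}}
# run on the headscale server to complete the registration
headscale nodes register --user USER --key nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
{{/code}}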
158 +View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
159 +
125 125  = Jottacloud client =
126 126  
127 127  The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
128 128  NOTE! This setup still uses user `kevin` and not the correct `jottad` user.
129 129  
130 -
131 -{{{# install jotta-cli
165 +(((
166 +{{code language="none"}}
167 +# install jotta-cli
132 132  sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
133 133  echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
134 134  sudo apt-get update
... ... @@ -136,11 +136,17 @@
136 136  
137 137  # configure runtime environment
138 138  sudo useradd -m jottad
139 -sudo usermod -a -G jottad backup}}}
175 +sudo usermod -a -G jottad backup
176 +{{/code}}
177 +)))
140 140  
179 +Create the systemd file `/usr/lib/systemd/user/jottad.service` with the contents below, then enable it (a sketch of the enable step follows the unit file):
141 141  
142 -Create systemd file: `/usr/lib/systemd/user/jottad.service ` and enable with : 
143 -{{code language="ini" title="/usr/lib/systemd/user/jottad.service"}}[Unit]
181 +(((
182 +
183 +
184 +{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
185 +[Unit]
144 144  Description=Jotta client daemon
145 145  
146 146  [Service]
... ... @@ -153,7 +153,9 @@
153 153  Restart=on-failure
154 154  
155 155  [Install]
156 -WantedBy=default.target{{/code}}
198 +WantedBy=default.target
199 +{{/code}}
200 +)))
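The enable step itself is not captured above. A sketch, assuming the unit runs under systemd's user manager for whichever account owns the daemon (currently `kevin`, per the note above), and assuming jotta-cli's standard `add` subcommand with an illustrative backup path:

{{code language="bash"}}
# let the user's systemd instance keep running after logout
sudo loginctl enable-linger kevin

# enable and start the daemon, then mark a directory for backup
systemctl --user enable --now jottad.service
jotta-cli add /mnt/pergamum
{{/code}}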
157 157  
158 158  == Flaws ==
159 159