Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 23.1
edited by Kevin Wiki
on 2024/04/06 14:16
Change comment: There is no comment for this version
To version 16.1
edited by Kevin Wiki
on 2024/04/06 11:37
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,9 +2,6 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 -(% class="wikigeneratedid" %)
6 -Backups are primarily done through Proxmox Backup Server, taking snapshots of running LXCs and VMs. These are stored on a mirrored ZFS array and synchronized to both an off-site location and a cloud storage provider.
7 -
8 8  = Backup Server configuration =
9 9  
10 10  The backup server is set up with:
... ... @@ -22,45 +22,36 @@
22 22  
23 23  There are currently 2 x 8TB WD drives. Current pool status:
24 24  
25 -(((
26 -{{code language="none"}}
27 -kevin@clio:~$ sudo zpool status pergamum
28 -pool: pergamum
29 -state: ONLINE
30 -  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
22 +{{{kevin@clio:~$ sudo zpool status pergamum
23 + pool: pergamum
24 + state: ONLINE
25 + scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
31 31  config:
32 -        NAME                                            STATE     READ WRITE CKSUM
33 -        pergamum                                        ONLINE       0     0     0
34 -          raidz1-0                                      ONLINE       0     0     0
35 -            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
36 -            sdc1                                        ONLINE       0     0     0
37 -errors: No known data errors
38 -{{/code}}
39 -)))
40 40  
28 +        NAME                                            STATE     READ WRITE CKSUM
29 +        pergamum                                        ONLINE       0     0     0
30 +          raidz1-0                                      ONLINE       0     0     0
31 +            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
32 +            sdc1                                        ONLINE       0     0     0
41 41  
34 +errors: No known data errors}}}
35 +
36 +
42 42  === Creating and expanding a ZFS pool ===
43 43  
44 -(((
45 -{{code language="none"}}
39 +```
46 46  zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
47 47  zfs set mountpoint=/mnt/pergamum pergamum
48 48  (zpool import -c /etc/zfs/zpool.cache -aN)
49 49  zpool export pergamum
50 -{{/code}}
51 -)))
44 +```
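
The disks are referenced by partition UUID above; a quick way to map device names to those UUIDs before creating the pool:

{{code language="none"}}
# list block devices with their partition UUIDs
lsblk -o NAME,SIZE,PARTUUID
{{/code}}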
52 52  
53 53  
54 -(((
55 55  Have not tried yet, but adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
56 -
57 -{{code language="none"}}
48 +```
58 58  zpool add -n pergamum raidz DISK1 DISK2
59 -{{/code}}
60 -
61 -
50 +```
62 62  ~> NOTE! `-n` is a dry run; remove it to commit.
63 -)))
64 64  
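
Untested here as well, but after committing the add (without `-n`) the expansion can be verified:

{{code language="none"}}
# the pool should now show a second raidz vdev with the new disks
zpool status pergamum
# total SIZE should have grown by the new vdev's capacity
zpool list pergamum
{{/code}}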
65 65  
66 66  == Access Control ==
... ... @@ -131,40 +131,16 @@
131 131  
132 132  Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of Headscale is used as the Tailscale login server.
133 133  
134 -{{code language="bash"}}
135 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
136 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
122 +Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
123 +To view the status: `sudo tailscale status`.
137 137  
138 -sudo apt-get update
139 -sudo apt-get install tailscale
140 -
141 -systemctl status tailscaled.service
142 -sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
143 -tailscale status
144 -{{/code}}
145 -
146 -Connect to headscale login server:
147 -
148 -{{code language="none"}}
149 -$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
150 -
151 -To authenticate, visit:
152 -
153 - https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
154 -
155 -Success.
156 -{{/code}}
157 -
158 -View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
159 -
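To approve a node directly on the Headscale host instead of the register URL flow shown above, something like the following should work (the `--user` value and exact flags are assumptions; check `headscale --help` for the installed version):

{{code language="none"}}
# on the headscale server: register the node key printed by `tailscale up`
# USER and the flag names below are assumptions for illustration
headscale nodes register --user USER --key nodekey:<key-from-client>
{{/code}}
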
160 160  = Jottacloud client =
161 161  
162 162  The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud storage.
163 163  NOTE! This setup still uses the user `kevin` and not the correct `jottad` user.
164 164  
165 -(((
166 -{{code language="none"}}
167 -# install jotta-cli
130 +
131 +{{{# install jotta-cli
168 168  sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
169 169  echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
170 170  sudo apt-get update
... ... @@ -172,17 +172,11 @@
172 172  
173 173  # configure runtime environment
174 174  sudo useradd -m jottad
175 -sudo usermod -a -G jottad backup
176 -{{/code}}
177 -)))
139 +sudo usermod -a -G jottad backup}}}
178 178  
179 -Create systemd file: `/usr/lib/systemd/user/jottad.service ` and enable with :
180 180  
181 -(((
182 -
183 -
184 -{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
185 -[Unit]
142 +Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it (see the sketch below the unit file):
143 +{{code language="ini" title="/usr/lib/systemd/user/jottad.service"}}[Unit]
186 186  Description=Jotta client daemon
187 187  
188 188  [Service]
... ... @@ -195,9 +195,7 @@
195 195  Restart=on-failure
196 196  
197 197  [Install]
198 -WantedBy=default.target
199 -{{/code}}
200 -)))
156 +WantedBy=default.target{{/code}}
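
A plausible way to enable it for the `jottad` user, assuming a recent systemd; the `-M jottad@` manager targeting and the lingering step are assumptions, not taken from the original setup:

{{code language="none"}}
# allow the jottad user's systemd instance to run without a login session
sudo loginctl enable-linger jottad
# reload user units and start the daemon under the jottad user
sudo systemctl --user -M jottad@ daemon-reload
sudo systemctl --user -M jottad@ enable --now jottad.service
{{/code}}

With the daemon running, directories should be added with the CLI, e.g. `jotta-cli add /mnt/pergamum` (example path) and progress checked with `jotta-cli status`.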
201 201  
202 202  == Flaws ==
203 203