Changes for page Proxmox Backup Server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 20.1
edited by Kevin Wiki
on 2024/04/06 14:02
Change comment: There is no comment for this version
To version 18.1
edited by Kevin Wiki
on 2024/04/06 12:48
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -19,45 +19,36 @@
19 19  
20 20  There are currently 2 x 8TB WD drives. Current pool status:
21 21  
22 -(((
23 -{{code language="none"}}
24 -kevin@clio:~$ sudo zpool status pergamum
25 -pool: pergamum
26 -state: ONLINE
27 -  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
22 +{{{kevin@clio:~$ sudo zpool status pergamum
23 + pool: pergamum
24 + state: ONLINE
25 + scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
28 28  config:
29 -        NAME                                            STATE     READ WRITE CKSUM
30 -        pergamum                                        ONLINE       0     0     0
31 -          raidz1-0                                      ONLINE       0     0     0
32 -            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
33 -            sdc1                                        ONLINE       0     0     0
34 -errors: No known data errors
35 -{{/code}}
36 -)))
37 37  
28 +        NAME                                            STATE     READ WRITE CKSUM
29 +        pergamum                                        ONLINE       0     0     0
30 +          raidz1-0                                      ONLINE       0     0     0
31 +            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
32 +            sdc1                                        ONLINE       0     0     0
38 38  
34 +errors: No known data errors}}}
35 +
36 +
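Since this pool backs the backup server, it may be worth checking its health from a script rather than eyeballing the output. The sketch below is an addition, not part of the original setup: it parses a saved copy of the status output for the two lines that indicate a healthy pool (the file path and grep patterns are assumptions; the sample lines reuse the status shown above).

```shell
# Sketch: check a captured `zpool status` output for a healthy state.
# In practice the file would come from:
#   sudo zpool status pergamum > /tmp/pergamum-status.txt
# The sample below reuses lines from the status shown above.
cat > /tmp/pergamum-status.txt <<'EOF'
  pool: pergamum
 state: ONLINE
errors: No known data errors
EOF

# Exit non-zero (useful for cron alerting) unless both checks pass.
grep -q 'state: ONLINE' /tmp/pergamum-status.txt \
  && grep -q 'No known data errors' /tmp/pergamum-status.txt \
  && echo "pergamum: healthy"
```

For a live check, `zpool status -x` reports only pools with problems, which can be simpler to alert on.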
39 39  === Creating and expanding zfs pool ===
40 40  
41 -(((
42 -{{code language="none"}}
39 +```
43 43  zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
44 44  zfs set mountpoint=/mnt/pergamum pergamum
45 45  (zpool import -c /etc/zfs/zpool.cache -aN)
46 46  zpool export pergamum
47 -{{/code}}
48 -)))
44 +```
49 49  
50 50  
51 -(((
52 52  Not tried yet, but to add another set of disks as an additional top-level virtual device (vdev) to the existing RAID-Z pool:
53 -
54 -{{code language="none"}}
48 +```
55 55  zpool add -n pergamum raidz DISK1 DISK2
56 -{{/code}}
57 -
58 -
50 +```
59 59  ~> NOTE! `-n` performs a dry run; remove it to actually apply the change.
60 -)))
61 61  
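As a sanity check when planning the expansion: a raidz1 vdev of N disks yields roughly N-1 disks of usable capacity, since one disk's worth of space goes to parity. A quick sketch of the arithmetic (this simple N-1 model is an approximation that ignores ZFS padding and metadata overhead):

```shell
# Rough usable-capacity estimate for a raidz1 vdev (N-1 model).
DISKS=2       # current pool: 2 x 8TB WD drives
SIZE_TB=8
USABLE_TB=$(( (DISKS - 1) * SIZE_TB ))
echo "raidz1 with ${DISKS} x ${SIZE_TB}TB -> ~${USABLE_TB}TB usable"
```

Adding a second two-disk raidz vdev with `zpool add` would contribute roughly the same amount again; `zfs list pergamum` shows the actual usable figures.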
62 62  
63 63  == Access Control ==
... ... @@ -131,26 +131,13 @@
131 131  Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
132 132  To view the status: `sudo tailscale status`.
133 133  
134 -{{code language="bash"}}
135 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
136 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
137 -
138 -sudo apt-get update
139 -sudo apt-get install tailscale
140 -
141 -systemctl status tailscaled.service
142 -sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud --authkey AUTHKEY
143 -tailscale status
144 -{{/code}}
145 -
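To avoid retyping the login server and key, the `tailscale up` invocation can be assembled from variables. A small sketch (the `TS_*` variable names are made up here; `TAILSCALE_SUBDOMAIN` and `AUTHKEY` are the same placeholders as above and must be filled in before use):

```shell
# Sketch: build the `tailscale up` command from variables.
# TS_LOGIN_SERVER / TS_AUTHKEY are hypothetical names; substitute the
# real login server URL and a valid pre-auth key.
TS_LOGIN_SERVER="https://TAILSCALE_SUBDOMAIN.schleppe.cloud"
TS_AUTHKEY="AUTHKEY"
CMD="tailscale up --login-server ${TS_LOGIN_SERVER} --authkey ${TS_AUTHKEY}"
echo "would run: sudo ${CMD}"
# (echo only here; run the assembled command on the actual host)
```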
146 146  = Jottacloud client =
147 147  
148 148  The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
149 149  NOTE! This setup still uses user `kevin` and not the correct `jottad` user.
150 150  
151 -(((
152 -{{code language="none"}}
153 -# install jotta-cli
130 +
131 +{{{# install jotta-cli
154 154  sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
155 155  echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
156 156  sudo apt-get update
... ... @@ -158,10 +158,9 @@
158 158  
159 159  # configure runtime environment
160 160  sudo useradd -m jottad
161 -sudo usermod -a -G jottad backup
162 -{{/code}}
163 -)))
139 +sudo usermod -a -G jottad backup}}}
164 164  
141 +
165 165  Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it with:
166 166  
167 167  (((