Changes for page Proxmox Backup server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version
20.1
edited by Kevin Wiki
on 2024/04/06 14:02
Change comment:
There is no comment for this version
To version
14.1
edited by Kevin Wiki
on 2024/04/06 11:14
Change comment:
There is no comment for this version
Summary
Details
- Page properties
- Content
@@ -2,7 +2,7 @@
(((
(% class="col-xs-12 col-sm-8" %)
(((
-= Backup Server configuration =
+= Configuration =

Backup server is setup with:

@@ -19,45 +19,36 @@

There are currently 2 x 8TB WD drives. Current pool status:

-(((
-{{code language="none"}}
-kevin@clio:~$ sudo zpool status pergamum
-pool: pergamum
-state: ONLINE
- scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
+{{{kevin@clio:~$ sudo zpool status pergamum
+  pool: pergamum
+ state: ONLINE
+  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
config:
-        NAME                                            STATE     READ WRITE CKSUM
-        pergamum                                        ONLINE       0     0     0
-          raidz1-0                                      ONLINE       0     0     0
-            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
-            sdc1                                        ONLINE       0     0     0
-errors: No known data errors
-{{/code}}
-)))

+        NAME                                            STATE     READ WRITE CKSUM
+        pergamum                                        ONLINE       0     0     0
+          raidz1-0                                      ONLINE       0     0     0
+            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
+            sdc1                                        ONLINE       0     0     0

+errors: No known data errors}}}
+
+
=== Creating and expanding zfs pool ===

-(((
-{{code language="none"}}
+```
zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
zfs set mountpoint=/mnt/pergamum pergamum
(zpool import -c /etc/zfs/zpool.cache -aN)
zpool export pergamum
-{{/code}}
-)))
+```


-(((
have not tried yet, but adding another set of disks for an additional top-level virtual device to our existing RAID-Z pool:
-
-{{code language="none"}}
+```
zpool add -n pergamum raidz DISK1 DISK2
-{{/code}}
-
-
+```
~> NOTE! `-n` is dry run, remove to commit.
-)))
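
~> NOTE: the block below is an added reference sketch and is not part of either page version. It shows how the dry-run expansion above would be committed and then verified; `DISK1` and `DISK2` are placeholders for the real `/dev/disk/by-partuuid/...` paths, as in the note above.

```
# dry run: prints the resulting pool layout without touching the pool
zpool add -n pergamum raidz DISK1 DISK2

# commit by dropping -n, then confirm the new top-level raidz vdev shows up
zpool add pergamum raidz DISK1 DISK2
zpool status pergamum
```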


== Access Control ==

@@ -131,79 +131,40 @@
Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
To view the status: `sudo tailscale status`.

-{{code language="bash"}}
-curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
-curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
+= Client Configuration =

-sudo apt-get update
-sudo apt-get install tailscale
+Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added input the following preferences:
+\\{{code language="none" width="100%"}}Selection mode: include selected VMs
+Send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
+Email: On failure only
+Mode: Snapshot
+Enabled: True
+Job Comment: {{guestname}}, {{node}}, {{vmid}}{{/code}}

-systemctl status tailscaled.service
-sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud --authkey AUTHKEY
-tailscale status
-{{/code}}
+= Methodology =

-= Jottacloud client =
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-Cloud backup provider used is jottacloud. They provide a cli to easily add directories to sync to their cloud backup storage.
-NOTE! This setup still uses user `kevin` and not the correct jottad user.
+== Sub-paragraph ==

-(((
-{{code language="none"}}
-# install jotta-cli
-sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
-echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
-sudo apt-get update
-sudo apt-get install jotta-cli
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-# configure runtime environment
-sudo useradd -m jottad
-sudo usermod -a -G jottad backup
-{{/code}}
-)))
+= Proxmox backup server =

-Create systemd file: `/usr/lib/systemd/user/jottad.service` and enable with:
+= =

-(((
-
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
-[Unit]
-Description=Jotta client daemon
+== Sub-paragraph ==

-[Service]
-Type=notify
-# Group=backup
-# UMask=0002
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
-ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
-Restart=on-failure
+== Sub-paragraph ==

-[Install]
-WantedBy=default.target
-{{/code}}
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
)))

-== Flaws ==

-Since proxmox backup server uses chunks for deduplicating data a complete file list is required.
-This makes it impossible to download a single file representing a VM or LXC, all files must be downloaded and imported into proxmox backup server for reconstruction.
-
-It also seems like there are a LOT of files shifting - being added and deleted. Making the diff uploaded to jottacloud huge.
-
-= Client Configuration =
-
-Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added input the following preferences:
-
-* selection mode: include selected VMs
-* send email to: [[[email protected]>>mailto:[email protected]]]
-* email: on failure only
-* mode: snapshot
-* enabled: true
-* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
-)))
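
~> NOTE: the block below is an added reference sketch and is not part of either page version. The removed Jottacloud section says to create `/usr/lib/systemd/user/jottad.service` "and enable with:", but the enable command itself falls outside the hunk. Assuming the unit is run as a systemd user service for the account that owns the backups (still `kevin`, per the note in the removed text), enabling it would look roughly like this.

```
# pick up the new user unit and start it now and on future logins
systemctl --user daemon-reload
systemctl --user enable --now jottad.service

# keep the user service running without an interactive login (unattended backup host)
sudo loginctl enable-linger kevin

# verify the daemon came up
systemctl --user status jottad.service
```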

(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}