Changes for page Proxmox Backup server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version 7.1, edited by Kevin Wiki on 2024/04/06 10:12
Change comment: There is no comment for this version
To version 23.1, edited by Kevin Wiki on 2024/04/06 14:16
Change comment: There is no comment for this version
Summary
Details
- Page properties
- Content
(((
(% class="col-xs-12 col-sm-8" %)
(((
(% class="wikigeneratedid" %)
Backups are primarily done through Proxmox Backup Server, which takes snapshots of running LXCs and VMs. These are stored on a mirrored ZFS array and synchronized to both an off-site location and a cloud storage provider.

= Backup Server configuration =

The backup server is set up with:

* ZFS storage
* access control - API tokens
* datastore
** sync jobs
** prune jobs
** verify jobs
** permissions
* timings and simulator

== ZFS storage array ==

There are currently 2 x 8TB WD drives. Current pool status:
(((
{{code language="none"}}
kevin@clio:~$ sudo zpool status pergamum
  pool: pergamum
 state: ONLINE
  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
config:
        NAME                                            STATE     READ WRITE CKSUM
        pergamum                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
            sdc1                                        ONLINE       0     0     0
errors: No known data errors
{{/code}}
)))

=== Creating and expanding zfs pool ===

(((
{{code language="none"}}
zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
zfs set mountpoint=/mnt/pergamum pergamum
(zpool import -c /etc/zfs/zpool.cache -aN)
zpool export pergamum
{{/code}}
)))

(((
Not tried yet, but adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool should look like:

{{code language="none"}}
zpool add -n pergamum raidz DISK1 DISK2
{{/code}}

~> NOTE! `-n` is a dry run; remove it to commit.
)))

== Access Control ==

Each client host that wants to back up its contents to the backup server should have its own API token for authentication.

API Token:

* user: root@pam
* token name: CLIENT_NAME
* expire: never
* enabled: true

Permissions - add an API Token Permission:

* path: /datastore/proxmox-backup/CLIENT_NAME
* api token: root@pam!CLIENT_NAME
* role: DatastoreBackup
* propagate: true

>Note! The path will not be defined until after the datastore namespace is defined in the steps below.

== Proxmox datastore ==

If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All later references to `proxmox-backup` refer to whatever you name it in this create step.
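The datastore, token, and permission setup from the sections above can also be done from the backup server's shell with `proxmox-backup-manager`. This is an untested sketch using the same `proxmox-backup` and `CLIENT_NAME` placeholders as above - double-check flags against `proxmox-backup-manager help` before running:

{{code language="bash"}}
# create the datastore on the ZFS mountpoint (skip if it already exists)
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum

# generate an API token for the client; the secret is only printed once, so save it
proxmox-backup-manager user generate-token root@pam CLIENT_NAME

# grant the token backup rights on the client's namespace path
proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
{{/code}}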
=== Namespace ===

Namespaces are what we use within a datastore to separate permissions per host. It is important to create one for each of the API tokens created in the Access Control section above.

=== Prune & Garbage collect ===

We don't require backups for every day of the year. Pruning lets you systematically delete older backups, retaining backups for the last given number of time intervals. There is a fantastic simulator that can be used to experiment with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is:

* datastore: proxmox-backup
* namespace: root
* keep last: 4
* keep hourly: -
* keep daily: 6
* keep weekly: 3
* keep monthly: 6
* keep yearly: 4
* max-depth: full
* prune schedule: 0/6:00
* enabled: true

=== Verify jobs ===

The current configuration is:

* local datastore: proxmox-backup
* namespace: root
* max-depth: full
* schedule: daily
* skip verified: true
* re-verify after: 30 days

=== Permissions ===

Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:

* path: /datastore/proxmox-backup/CLIENT_NAME
* API Token: root@pam!CLIENT_NAME
* Role: DatastoreBackup
* Propagate: true

= Tailscale =

Tailscale is used to create a WireGuard-based network that transparently connects local and remote machines. To avoid relying on a third party, a local instance of Headscale is used as the Tailscale login server.

{{code language="bash"}}
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

sudo apt-get update
sudo apt-get install tailscale

systemctl status tailscaled.service
sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
tailscale status
{{/code}}

Connect to the headscale login server:

{{code language="none"}}
$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud

To authenticate, visit:

	https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f

Success.
{{/code}}

More details in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
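Once a client can reach the backup server over the tailnet, the API token and namespace from the sections above can be smoke-tested with `proxmox-backup-client`. This is an illustrative sketch - the hostname `pbs.SUBDOMAIN.schleppe.cloud` and the `CLIENT_NAME`/`API_TOKEN_SECRET` placeholders are assumptions, and the `--ns` flag needs a reasonably recent Proxmox Backup Server and client (2.2+):

{{code language="bash"}}
# the token secret printed when the API token was generated
export PBS_PASSWORD='API_TOKEN_SECRET'

# back up the root filesystem into the client's namespace of the proxmox-backup datastore
proxmox-backup-client backup root.pxar:/ \
  --repository 'root@pam!CLIENT_NAME@pbs.SUBDOMAIN.schleppe.cloud:proxmox-backup' \
  --ns CLIENT_NAME
{{/code}}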
= Jottacloud client =

The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
NOTE! This setup still uses user `kevin` and not the correct jottad user.

(((
{{code language="none"}}
# install jotta-cli
sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
sudo apt-get update
sudo apt-get install jotta-cli

# configure runtime environment
sudo useradd -m jottad
sudo usermod -a -G jottad backup
{{/code}}
)))

Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it:

(((
{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
[Unit]
Description=Jotta client daemon

[Service]
Type=notify
# Group=backup
# UMask=0002

# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
Restart=on-failure

[Install]
WantedBy=default.target
{{/code}}
)))

== Flaws ==

Since Proxmox Backup Server deduplicates data into chunks, the complete set of chunk files is required for a restore. This makes it impossible to download a single file representing a VM or LXC; all files must be downloaded and imported into Proxmox Backup Server for reconstruction.

It also seems like a LOT of files shift around - being added and deleted - which makes the diff uploaded to Jottacloud huge.

= Client Configuration =

Configure Backup on the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:

* selection mode: include selected VMs
* send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
* email: on failure only
* mode: snapshot
* enabled: true
* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
)))

(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}
{{toc/}}
{{/box}}
)))
)))