Changes for page Proxmox Backup Server
Last modified by Kevin Wiki on 2024/05/21 21:23
From version
6.1
edited by Kevin Wiki
on 2024/04/06 10:12
Change comment:
There is no comment for this version
To version
17.1
edited by Kevin Wiki
on 2024/04/06 12:48
Change comment:
There is no comment for this version
Summary
Details
- Page properties
- Content
@@ -2,117 +2,189 @@
 (((
 (% class="col-xs-12 col-sm-8" %)
 (((
-= Configuration =

-Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added input the following preferences:
-\\{{code language="none" width="100%"}}Selection mode: include selected VMs
-Send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
-Email: On failure only
-Mode: Snapshot
-Enabled: True
-Job Comment: {{guestname}}, {{node}}, {{vmid}}{{/code}}

-= {{code language="yaml" layout="LINENUMBERS"}}apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: prometheus-node-exporter-daemonset
-  namespace: monitoring
-  labels:
-    app: prometheus-node-exporter
-spec:
-  selector:
-    matchLabels:
-      app: prometheus-node-exporter
-  template:
-    metadata:
-      labels:
-        app: prometheus-node-exporter
-    spec:
-      containers:
-      - args:
-        - --path.procfs=/host/proc
-        - --path.sysfs=/host/sys
-        - --path.rootfs=/host/root
-        - --web.listen-address=:9100
-        image: quay.io/prometheus/node-exporter:latest
-        imagePullPolicy: IfNotPresent
-        name: prometheus-node-exporter
-        ports:
-        - name: metrics
-          containerPort: 9100
-          hostPort: 9100
-        securityContext:
-          allowPrivilegeEscalation: false
-        volumeMounts:
-        - mountPath: /host/proc
-          name: proc
-          readOnly: true
-        - mountPath: /host/sys
-          name: sys
-          readOnly: true
-        - mountPath: /host/root
-          mountPropagation: HostToContainer
-          name: root
-          readOnly: true
-      hostNetwork: true
-      hostPID: true
-      restartPolicy: Always
-      tolerations:
-      - key: "node-role.kubernetes.io/master"
-        effect: "NoSchedule"
-      volumes:
-      - hostPath:
-          path: /proc
-          type: ""
-        name: proc
-      - hostPath:
-          path: /sys
-          type: ""
-        name: sys
-      - hostPath:
-          path: /
-          type: ""
-        name: root
-  updateStrategy:
-    rollingUpdate:
-      maxSurge: 0
-      maxUnavailable: 1
-    type: RollingUpdate{{/code}} =

-= Methodology =

-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-== Sub-paragraph ==

-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-= Proxmox backup server =

-= =

-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-== Sub-paragraph ==

-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

-== Sub-paragraph ==

-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

+= Backup Server configuration =

+Backup server is set up with:

+* zfs storage
+* access control - api tokens
+* datastore
+** sync jobs
+** prune jobs
+** verify jobs
+** permissions
+* timings and simulator

+== ZFS storage array ==

+There are currently 2 x 8TB WD drives. Current pool status:

+{{{kevin@clio:~$ sudo zpool status pergamum
+  pool: pergamum
+ state: ONLINE
+  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
+config:
+
+        NAME                                            STATE     READ WRITE CKSUM
+        pergamum                                        ONLINE       0     0     0
+          raidz1-0                                      ONLINE       0     0     0
+            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
+            sdc1                                        ONLINE       0     0     0
+
+errors: No known data errors}}}

+=== Creating and expanding the ZFS pool ===

+```
+zpool create -f pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb
+zfs set mountpoint=/mnt/pergamum pergamum
+(zpool import -c /etc/zfs/zpool.cache -aN)
+zpool export pergamum
+```

+Not tried yet, but adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool should be:

+```
+zpool add -n pergamum raidz DISK1 DISK2
+```

+>Note! `-n` is a dry run; remove it to commit.

+== Access Control ==

+Each client host that wants to back up its contents to the backup server should have its own API token for authentication; a CLI sketch follows after these lists.

+API Token:

+* user: root@pam
+* token name: CLIENT_NAME
+* expire: never
+* enabled: true

+Permissions - add an API Token Permission:

+* path: /datastore/proxmox-backup/CLIENT_NAME
+* api token: root@pam!CLIENT_NAME
+* role: DatastoreBackup
+* propagate: true

+>Note! The path will not be defined until after the datastore namespace is defined in the steps below.
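
+For reference, the same token and permission can also be created from the backup server's shell with `proxmox-backup-manager`. A minimal sketch using the same `CLIENT_NAME` placeholder; the GUI flow above is what this setup actually used:

+```
+# create an API token for the client; the token secret is printed once, save it
+proxmox-backup-manager user generate-token root@pam CLIENT_NAME
+
+# grant the token backup rights on the client's path in the datastore
+proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
+```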

+== Proxmox datastore ==

+If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. Every later reference to `proxmox-backup` refers to whatever you named the datastore in this create step.

+=== Namespace ===

+Namespaces are what we use within a datastore to separate permissions per client host. It is important to create one for each of the API tokens created in the Access Control section above; a combined CLI sketch follows below.
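
+To see how the datastore, namespace and API token fit together, a minimal end-to-end sketch from the shell. The server address `pbs.example.lan` is a hypothetical stand-in, and the datastore create mirrors the GUI step above:

+```
+# on the backup server: create the datastore on the ZFS mount (one time)
+proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
+
+# on a client: authenticate with the client's API token and back up into its namespace
+export PBS_REPOSITORY='root@pam!CLIENT_NAME@pbs.example.lan:proxmox-backup'
+export PBS_PASSWORD='API_TOKEN_SECRET'
+proxmox-backup-client backup root.pxar:/ --ns CLIENT_NAME
+```

+The namespace given with `~-~-ns` must already exist, per the section above.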

+=== Prune & Garbage collect ===

+We don't require backups for every day of the year. Pruning lets you systematically delete older backups, retaining backups for the last given number of time intervals. There is a fantastic simulator for experimenting with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is:

+* datastore: proxmox-backup
+* namespace: root
+* keep last: 4
+* keep hourly: -
+* keep daily: 6
+* keep weekly: 3
+* keep monthly: 6
+* keep yearly: 4
+* max-depth: full
+* prune schedule: 0/6:00 (every six hours)
+* enabled: true

+=== Verify jobs ===

+The current configuration is:

+* local datastore: proxmox-backup
+* namespace: root
+* max-depth: full
+* schedule: daily
+* skip verified: true
+* re-verify after: 30 days

+=== Permissions ===

+Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:

+* path: /datastore/proxmox-backup/CLIENT_NAME
+* API Token: root@pam!CLIENT_NAME
+* Role: DatastoreBackup
+* Propagate: true

+= Tailscale =

+Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of Headscale is used as the Tailscale login server.

+Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
+To view the status: `sudo tailscale status`.

+= Jottacloud client =

+The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage; see the sketch after the service setup below.
+NOTE! This setup still uses user `kevin` and not the correct `jottad` user.

+{{{# install jotta-cli
+sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
+echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
+sudo apt-get update
+sudo apt-get install jotta-cli
+
+# configure runtime environment
+sudo useradd -m jottad
+sudo usermod -a -G jottad backup}}}

+Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it with `systemctl ~-~-user enable ~-~-now jottad`:

+(((
+\\
+{{code layout="LINENUMBERS" language="ini"}}
+[Unit]
+Description=Jotta client daemon
+
+[Service]
+Type=notify
+# Group=backup
+# UMask=0002
+
+# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
+ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
+Restart=on-failure
+
+[Install]
+WantedBy=default.target
+{{/code}}
 )))
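
+Once the daemon is running, directories are added to the backup set with `jotta-cli`. A plausible first run - the exact subcommands may vary between jotta-cli versions, and pointing it at the datastore mount `/mnt/pergamum` is an assumption based on this setup:

+```
+# link the daemon to the Jottacloud account
+jotta-cli login
+
+# add the PBS datastore directory to the backup set (assumed path)
+jotta-cli add /mnt/pergamum
+
+# watch sync progress
+jotta-cli status
+```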

+== Flaws ==

+Since Proxmox Backup Server uses chunks for deduplicating data, the complete set of files is required for any restore. This makes it impossible to download a single file representing a VM or LXC; all files must be downloaded and imported into Proxmox Backup Server for reconstruction.

+It also seems like a LOT of files shift around - being added and deleted - which makes the diff uploaded to Jottacloud huge.

+= Client Configuration =

+Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added, input the following preferences:

+* selection mode: include selected VMs
+* send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
+* email: on failure only
+* mode: snapshot
+* enabled: true
+* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
+)))

 (% class="col-xs-12 col-sm-4" %)
 (((
 {{box title="**Contents**"}}
 {{toc/}}
 {{/box}}
-
-[[image:[email protected]]]
-//Figure 1: [[Sea>>https://commons.wikimedia.org/wiki/File:Isle_of_Icacos_II.jpg]]//
-
-[[image:[email protected]]]
-//Figure 2: [[Waves>>https://commons.wikimedia.org/wiki/File:Culebra_-_Playa_de_Flamenco.jpg]]//
 )))
 )))